While this article is interesting, it does start from a pretty strong assumption: that the problem is specific to one branch (or family of branches) of Academia.
I remember at least two similar scandals in very much non-social-justice-related branches of Academia. One was in postmodern studies, if I recall correctly, the other one I think in epistemology. In one case, the paper had been written to deliberately mean nothing, in the other one, I seem to remember that the papers had been generated with Markov chains.
So, based on the same data, my personal conclusion is that some fields of Academia are vulnerable to bullshit, whether it's sophistry or simply using the right lingo. It is a pretty bad sign, and I suspect that it is correlated with fields in which results are hard to check/reproduce.
It may also be correlated with ideology (I remember that this was the accusation towards postmodern studies), which doesn't mean that it is correlated with any specific ideology.
In other words: promising research, but the starting hypothesis and its possible limitations need to be expressed more clearly, and much more data is needed.
I studied sociology in Spain. I'd say that these problems are correlated with almost anything related to the Frankfurt School / critical theorists. Not only do they have these problems, but professors affiliated with this school of thought were incredibly pervasive, and they were manipulative and conspiratorial against other professors and students who didn't agree with them or had other worldviews.
I had feminist theory as a subject. I remember one day we were talking about power dynamics in sex and all that jazz. The professor said something along the lines of: the state should teach people which sex positions are misogynist. She asked the students for opinions and I said that I wasn't cool with the state watching what people do in their bedrooms. I presented my arguments, and she was out of her mind, calling me a "neoliberal" in front of my peers, and even talking about me to other classes (other students told me).
That was the first time I was aware of this behavior. Later I found that this group of people maneuvered to make life difficult for other professors, sabotaging their research, and so on.
Thankfully there was another bunch of people who made it tolerable: Statistics I, II, III, Research Methodology I & II, Macroeconomics I & II, Anthropology (he was clearly a conservative) and some others I forgot.
Maybe all this transfers to publishing. Speaking with students from other faculties in Spain and other countries, I've heard similar stories.
To this day I think studying sociology was a waste of my time. It was interesting, and having a statistics and research background made me employable, but to be honest the atmosphere was barely tolerable.
But that's the whole concept of social studies: identify grievances versus a given value system that is treated as dogma, and then threaten and shame society into (sometimes draconian) measures to fix those grievances.
Not at all. "Social studies", AKA the social sciences, are divided into, say, two ethical perspectives.
On one side are Marx and the whole Marxist school. These people believe that the social scientist has to want to change society, and should act on that desire.
On the other side is pretty much everyone else (simplifying heavily here), mostly people who believe that actual science has to be kept separate from activism, and that achieving the highest degree of neutrality and detachment is desirable.
While it is not hard to detach science from activism, it is hard not to comment on what the desirable society would actually be if you are involved in the field.
IIRC you have examples of this in the functionalist and symbolic interactionist schools, which produced both descriptive and inferential studies but didn't mix activism into them, as far as I can tell.
I think you're mistaken, largely in that you're construing the ideology being targeted too narrowly. Note the bit at the beginning about "critical constructivism" -- this is certainly not only about Social Justice; that's just where it's the most obvious.
In short, this isn't just about certain fields being vulnerable to sophistry and bullshit. This is about a number of fields or subfields which are dominated by (not merely vulnerable to) bullshit and sophistry for a common reason, namely, that they have, for what could be called ideological reasons, abandoned the preconditions that make actual truth-seeking possible. Social Justice and the postmodern studies are both part of this; they are related, contrary to what you claim.
> I think you're mistaken, largely in that you're construing the ideology being targeted too narrowly. Note the bit at the beginning about "critical constructivism" -- this is certainly not only about Social Justice; that's just where it's the most obvious.
Good point. This appeared at the start of the OP, then basically disappeared from both the OP and my mind.
> Social Justice and the postmodern studies are both part of this; they are related, contrary to what you claim.
I'll admit to my lack of knowledge of both fields, but I suspect that you are right, insofar as they can probably be described as two applications of "critical constructivism".
So, this does remove one of my criticisms of the OP. I'm still not happy about the phrasing (in particular the title) that turns it into a clickbait-let's-bash-at-social-justice-related-publications-without-looking-at-the-bigger-picture piece, though.
> I remember at least two similar scandals in very much non-social-justice-related branches of Academia. One was in postmodern studies, if I recall correctly, the other one I think in epistemology. In one case, the paper had been written to deliberately mean nothing, in the other one, I seem to remember that the papers had been generated with Markov chains.
The article is very clear about its methodology and is interacting with the community in a long-term, peer-reviewed manner. They were invited to peer-review other papers. This is not one substandard paper that was published in a no-name journal.
If your field of academia is this vulnerable to nonsense, perhaps we should start considering the field not academic and purge it from our institutions of learning.
> If your field of academia is this vulnerable to nonsense, perhaps we should start considering the field not academic and purge it from our institutions of learning.
I seem to remember that the nonsensical paper was in the top journal of postmodern studies. I don't remember details about the Markov chain papers.
Also, I have seen bullshit papers in established journals in computer science. Not many, but then, computer science results are somewhat harder to fake than humanities, and I don't think there were many social engineering attacks on CS journals such as the one described in the OP.
So, yes, it seems that the OP describes a real problem. For the reasons mentioned above, I do not believe it is limited to social-justice-related branches.
You are right that this should serve as a wake-up call. Personally, I believe that such wake-up calls are useful for a domain, once in a while. The ball is now in the editors' court. If they manage to raise the bar for accepting papers, their field will have benefited from this work.
It is also possible, as you suggest, that the field is not of academic interest. As I have read exactly 0 articles from the publications targeted by the OP, I have no factual basis to discuss this.
1. Lack of rigor. This just means poor quality papers get published.
2. Ideological pre-conclusion. This means the field is dominated by a moralistic ideology, instead of being about neutral truth-seeking as it claims. This means that papers which conform to the ideology's pre-conclusions get published, while those that contradict do not, with insufficient regard to rigor in either case. It also means scientists/students with non-conforming beliefs are purged from the field as a whole, again, regardless of actual ability.
These are related problems but not exactly the same. Number 1 doesn't imply number 2, but number 2 requires number 1. The hoax attempts to demonstrate both.
I seem to remember articles from the early 1980s pointing out that some academic journals (literary criticism, ...) seem to be full of nonsense. Perhaps Martin Gardner wrote one such article, but I could be completely wrong about that. So, yes, I agree, it's not obvious that this is linked to a specific ideology.
> Because open, good-faith conversation around topics of identity such as gender, race, and sexuality (and the scholarship that works with them) is nearly impossible, our aim has been to reboot these conversations.
...
> a clear reason to look at the identitarian madness coming out of the academic and activist left
I'm going to revisit the article later, because the first few paragraphs threw up many red flags, coming across as someone with a chip on their shoulder who set out to "prove" themselves correct. Hopefully this is a misunderstanding on my part, but I recommend that others attempting persuasion avoid such signaling.
To be flippant, it sounds like you're quick to assume bad faith, which would seem to support the first point you quote.
The experiment they ran was to try to get terrible papers published in authoritative journals in certain fields. If they succeeded, this would point towards a serious problem with established practice in those fields; their suspicion of such a problem might be labeled a "chip on their shoulder" or a "hypothesis". According to what they say, they succeeded.
As for it being "identitarian madness" and "coming out of the academic and activist left"—do you think either of those parts is false or is irrelevant to what they're studying? If not, how would you suggest they talk about it?
> you're quick to assume bad faith, which would seem to support the first point you quote
I don't think that follows. If you start with "there's no point in good faith" (paraphrase), you can't say that someone taking offense at their approach has proven their point. It doesn't DISPROVE it, and I'm not claiming I have, but it certainly doesn't prove the point, because they themselves weren't doing anything that relies on that point (which is why they used it as justification).
> their suspicion of such a problem might be labeled a "chip on their shoulder" or a "hypothesis"
Suspicion is indeed a hypothesis. Conviction is more "chip on their shoulder". At that point of the document the reader is open to the hypothesis, but has not yet seen any evidence.
> do you think either of those parts is false or is irrelevant to what they're studying?
I think if someone is trying to persuade me, extravagant up-front declarations, name-calling, and labels are not the way to establish one's credentials as an objective judge. I honestly am in no position to know if their suspicions are false - presumably, someone reading the paper to expand their knowledge would likewise be ignorant or at least unconvinced (why else read it?). Coming across as unreliable due to bias is not an effective way to convince that audience - it's a way to build an echo chamber.
That said, I said I wanted to revisit and plan to, once my initial impression has died down to reduce my own bias.
Would it point at a serious problem? I think a lot of people misunderstand how review works. The problem exists if these papers become highly cited and influential. Plenty of garbage gets published, even in good outlets. This approach takes advantage of the public's incorrect understanding of the expectations of peer review.
It also doesn't help that the article seems to deliberately hide which papers actually got accepted.
I think natural sciences will be self correcting in this respect. If something new and important has been discovered, people will want to use it / build on it and thus it will have to be replicated.
They literally have a video in the introduction which documents them deliberately submitting, in bad faith, papers that they've engineered to have problems, to try and manufacture evidence that bad studies can get into journals. They establish that they act in bad faith.
While authors ideally should act in good faith and only submit good papers, the whole idea of having journals and editors and peer review is to filter out bad papers. The problem here is not some people "deliberately submitting papers" which "have problems", the problem is that those papers were accepted by "academic" journals. This is not "manufacturing" evidence, this is simply evidence.
The bad guys here are not random authors but the journal editors and peer reviewers accepting this stuff.
Well firstly, I don't know why you're putting "academic" in quotation marks; it would be a wild assertion to try and claim these are anything other than academic journals.
The problem with this article is that its methodology tests whether it's possible to get low quality papers into certain academic journals, but its stated theory and conclusion are that academic journals are deliberately publishing bad science to push an agenda that this article calls "Grievance Studies".
The bad guys here are the random authors who have designed a bad experiment, deliberately set out to undermine academic institutions, and then rather than presenting real evidence have written a political hit piece.
> Well firstly, I don't know why you're putting "academic" in quotation marks; it would be a wild assertion to try and claim these are anything other than academic journals.
Not really, given the miserable quality of the papers they publish. Maybe 'newspaper' or 'journal' might be ok, but 'academic journal' requires a certain amount of quality.
> The problem with this article is that its methodology tests whether it's possible to get low quality papers into certain academic journals, but its stated theory and conclusion are that academic journals are deliberately publishing bad science to push an agenda that this article calls "Grievance Studies".
Academic journals publish papers deliberately. They don’t ‘accidentally’ publish bad papers, I can assure you that nobody accidentally clicks the ‘publish’ button. Whatever further agenda the article may or may not push pales in comparison to the actual problems it uncovers.
> The bad guys here are the random authors who have designed a bad experiment, deliberately set out to undermine academic institutions, and then rather than presenting real evidence have written a political hit piece.
Claiming these people undermine academic institutions is like blaming your upstairs neighbour stomping on the ground for your house of cards imploding. If a journal accepts and publishes such submissions, it was never an academic institution to begin with. The completely sufficient evidence for this is the acceptance and publication of those papers.
> Academic journals publish papers deliberately. They don’t ‘accidentally’ publish bad papers, I can assure you that nobody accidentally clicks the ‘publish’ button. Whatever further agenda the article may or may not push pales in comparison to the actual problems it uncovers.
Academic journals certainly publish papers (good or bad) deliberately, because the peer reviewers consider them of sufficient quality and interest for the domain. It has always been known that it's possible to get low quality papers into certain journals - sometimes even good ones, if one puts in sufficient effort - in pretty much all domains.
Concluding that these publications are accepted "to push an agenda" requires a leap of logic. In the past, it has been shown that simply mimicking the language (using Markov chains) was pretty much sufficient to get into some journals.
> > The bad guys here are the random authors who have designed a bad experiment, deliberately set out to undermine academic institutions, and then rather than presenting real evidence have written a political hit piece.
>
> Claiming these people undermine academic institutions is like blaming your upstairs neighbour stomping on the ground for your house of cards imploding. If a journal accepts and publishes such submissions, it was never an academic institution to begin with. The completely sufficient evidence for this is the acceptance and publication of those papers.
On this point, I mostly agree with claudius. While the authors have written something that looks much more like political clickbait than actual research findings, they have exposed weaknesses in the processes or reviewers of some journals.
These journals now have the ability to fix their mess, which will make everybody a winner.
So if you suspect a problem with epistemology, you can't conduct an experiment, because to do so is "bad faith"?
Do you display bad faith in the theory of gravity by conducting an experiment? Is the whole scientific process a bad faith attempt to determine objective truth by underhanded manipulation of the universe?
You can't conduct an experiment to establish why gravity happens. This experiment is like someone dropping a tennis ball and a bowling ball from the Tower of Pisa and then writing a conclusion that gravity has a liberal bias against bowling balls because it slows them down to the same speed as a tennis ball.
It's not bad faith in case of the premise that they were trying to prove, namely that you can write any old rubbish and get it published if it has the right agenda.
They got an article accepted in Hypatia, the flagship journal of feminist philosophy. I’ve never heard of any of the other journals and have no idea if they’re held in low regard generally but Hypatia is as good as it gets in that field.
They may have had an almighty chip on their shoulders but they do seem to have proved themselves correct. They got articles accepted in the Journal of Poetry Therapy, Affilia, Fat Studies, Gender, Place and Culture, Sexuality and Culture, Sex Roles and Hypatia.
Yeah, it's kind of ridiculous how much of a bias there is. It sounds like some Sam Harris or Jordan Peterson disciples going out to battle for them. It's been established that journals of all kinds, including physics, are susceptible to accepting papers that aren't up to snuff. There is even an app which will write such papers for you.
It's just a hit piece. I'm more interested in the reason why to be honest.
It is a shame to see you downvoted here for linking to that well-argued article. I know Jordan Peterson seems to have quite an impact on young men, but for older folks who have been around for a while, it is easy to see him as just this era’s Carlos Castaneda or other shallow (and probably rather ephemeral) spiritual guru.
I don't think it's a shame at all - that article is not the least bit well-argued. It goes pages before it quotes JP or in any way tries to debate his views.
Here are some excerpts from this "well-argued article":
>Jordan Peterson appears very profound and has convinced many people to take him seriously. Yet he has almost nothing of value to say.
>They are half nonsense, half banality. In a reasonable world, Peterson would be seen as the kind of tedious crackpot that one hopes not to get seated next to on a train.
>A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.”
It actually quotes JP's book, in what the author clearly thinks the reader will find a confusing way (I don't), and then adds:
>But here I am already giving Peterson's work a more coherent summary than it actually deserves.
Note that I did not pull those out of context; there is no supporting evidence. The author just... says stuff.
This is the opposite of well-written. It is ad hominem after playground insult. There is no point/counterpoint or debate format of any kind. The author basically assumes you think JP is not meaningful and then makes fun of the way he speaks.
But can you prove it wrong? I think you'll find a lot of such comically simplistic "conjoined triangles of success" in the arts.
More seriously, if you're interested in understanding the people who take him seriously, pointing out those caricatures is the wrong way to go about it. He creates a narrative that has emotional appeal and is backed up by science. He does get his facts straight in his arguments - he's not using pseudoscience. The psychology literature leaves just enough cracks for his theories to fall through undamaged. That's why those who oppose him have to resort to aesthetic arguments like this.
> Jordan Peterson’s popularity is the sign of a deeply impoverished political and intellectual landscape…
Yes, but that only proves he's the best public intellectual. In a world where everyone else is worse, why would you not listen to him?
The article also says he speaks "as unintelligibly as possible", which couldn't be further from the truth, so I don't know if it can be trusted. Halfway down the page it becomes obvious it's a hit piece to promote the author's book.
I agree that Peterson is overrated, but not for those reasons.
> Halfway in the page it becomes obvious it's a hit piece to promote the author's book.
Uh... that's a joke.
FTA:
> By the way, an amusing aside: a few years ago my colleague Oren Nimni and I wrote a parody of nonsensical academic grand theory called Blueprints for a Sparkling Tomorrow...
I'm not sure what to make of your inability to distinguish a parody of Grand Theory hucksterism from the real thing. Is this some kind of meta-commentary?
I couldn't get through the preface with a straight face. There's something really meta about a study attacking ideology within academia that is clearly driven by the authors' own ideological bias, which presumably had not found a welcome audience in these fields. Labeling a wide swath of the humanities as "grievance studies" belies their ultimate aim, which is to discredit them on an ideological basis rather than on the merits of the arguments.
Ok, but let's not discount the work ad hominem. Are the fields subject to problems of behavior inconsistent with the principles those fields typically consider important?
If one important principle is fairness between groups of people, then substitution of one group for another in matters of treatment shouldn't alter the results without consideration. If you wouldn't treat group A this way, then it would be inconsistent to accept treating group B the same way without discussing why. And that discussion should be held to some rigor. Without it, we're just talking about group A "vs" group B, instead of principles of how all groups should be treated.
Yes, clearly the grievances of each group are worth study and analysis. But it detracts from the significance of their results if they accept the creation of isomorphic grievances.
[spoken, rather obviously, as a computer scientist peering into this field of study from the outside]
You're setting up an entirely non-falsifiable argument.
Do multiple publications of obvious nonsense in peer-reviewed journals prove nothing to you? How do you know there are no other authors doing the same thing with different intent? What level of nonsense would have to get through for you to question the validity of the field?
That is not only an issue in academia, but in society as a whole. There is no culture of moderate discussion and respect anymore. Arguments are only valid if they fit the ideology. That is highly concerning.
This has happened before. Consider this: current diversity politics is diversity of skin color or gender, but not of thought. You are only allowed to think in a certain way if you want to belong to the group. This is not really diversity; it's superficial diversity.
Not convinced this study is ethical, but I do find it humorous.
7/20 bullshit studies published is a lot of studies, full stop. The methodology with which they pursued the publishing of the studies is telling as well.
I don't think it discredits these fields enough to re-brand them as "Grievance Studies" but I do think a 35% success rate by pandering to the journal's obvious biases says something important about the quality of discourse in these fields.
The most impressive hoodwinking was of Hypatia which is the top journal in feminist philosophy. Obviously one sees why other philosophers might be dismissive of it now.
> It can also be difficult to get publications in Hypatia taken seriously by one’s department: I know of at least one junior faculty member who was told that she needed to get some more ‘mainstream’ publications, despite her publication in Hypatia, the top journal in her field
If you find yourself in strong agreement with this article, it'd probably be worthwhile doing some introspection about what's actually bothering you -- the fact that some publications (apparently) lack academic rigor, or that you oppose them ideologically.
Based on the tone of the original article, it's pretty clear what's motivating them. It's hard to imagine them summoning the same outrage if they'd managed to sneak some bogus papers into a physics or medical journal (which, by the way, has happened).
With due respect, ideology is ultimately the only reason that anything gets done. RMS would never have created GNU, with all of its fabulously useful result, had he merely thought that existing tools were of insufficient quality.
The whole point of academia is that people who oppose you are going to take their best shot at ripping your work to shreds. If your work can withstand their worst, we'll all know that it's (probably) really good stuff.
Alternatively, if your work blows over in a light wind, better that everyone should know as soon as possible.
We can all agree that cultishness is a general human failing and leftist cults are as likely to form as Branch Davidians. Historically, some of those cults have caused immense harm.
Nevertheless, this article strikes me as being deliberately written for its own cult members. Its air of injured rationality is exhibit A in grievance studies, the white people version.
Is there some vast underground of fence-sitters waiting to be convinced by their argument? I doubt it. Instead, the article is catnip for those who share the same grievances as the writers. Sure, they have one more data point that the other side is gullible or stupid.
I am more than willing to admit that portions of academia are cultish with guru figures who can easily destroy your life.
However, all one needs to do is look outside the ivory tower and ask if there are real injustices in the world, and the answer isn't hard to find. The vast majority of activists I know, women and men fighting for the rights of women, people of color, animals and trees have no interest in the battles over grievance studies.
This is not an article written in good faith. Which is to say, it's propaganda.
No. It's an article that's pointing out a problem with a field of study that points out problems. Which is not just a valid concern, but rather critical to keeping that field of study from the problems of groupthink.
> However, all one needs to do is look outside the ivory tower and ask if there are real injustices in the world, and the answer isn't hard to find. The vast majority of activists I know, women and men fighting for the rights of women, people of color, animals and trees have no interest in the battles over grievance studies.
> This is not an article written in good faith. Which is to say, it's propaganda.
I don't know what you're trying to say here. It sounds like you're saying that because you don't care about the problems listed in the article, and that there are other problems to talk about instead, that by focusing on these problems in the article, it must have been written in bad faith?
This looks like Sokal version 2.0: a bunch of hoax papers designed to demonstrate the lack of rigor in what some of the social science journals accept.
I'm not a big fan of this article at all. Firstly, if you want to try and actually persuade people of a point don't preface your entire discussion by deciding to use an epithet to describe what you're discussing. How can I take seriously someone who goes straight in and goes "This is grievance studies"? It seems kind of funky how meta it is to adopt that epithet and then go on to make the accusation that the people you are talking about are acting in bad faith.
Reading through the methodology, I'm really struggling to see the actual value of what they're doing. For example - the Dog Park study:
>What if we write a paper saying we should train men like we do dogs—to prevent rape culture? Hence came the “Dog Park” paper.
If you want to assert that this is a nutty idea - please back up that claim. Because I feel fairly comfortable arguing that we can learn a lot about social constructs by studying how they differ in other species. There's nothing wrong with that concept. Maybe there's something glaringly wrong in that particular paper, but these authors do nothing to actually demonstrate that.
Later on they go into detail about this. That paper is meant to have implausible statistics. Ok, what is peer review meant to do? Reject studies that have implausible findings? We have a word for that - bias. The correct response to a paper with implausible findings is to publish it and let people follow up and do their own studies. That is the system working correctly.
As for advocating for outlandish actions as a result of the study - well, authors are welcome to suggest anything they like as a result of their research. Publishing it doesn't imply endorsement. It implies "This is a real study".
So let's talk science. I have a confounding variable - bad studies got published because it's better to publish lots and let the academic community decide the merits of the study than to censor what you think is either implausible or has troubling parts. That explains all the results of this study without the wildly insulting and vaguely conspiracy theorist assertion that these papers get published because of an agenda on the part of academics.
It's a little telling that a bunch of their most lurid examples from the main text turn out to have been rejected; they seem to want to spin a narrative from the nice things reviewers said in the process of rejecting those papers. For instance, the "Masturbation" paper wasn't even "reject and resubmit"; it was a straight reject. If you don't read all the way to the end, you can miss that fact easily.
(Amusingly, they also print quotes from the same reviewer twice on that paper, creating the impression that multiple reviewers responded positively to it; I'm not going to take the time to audit all their peer quotes, but I'll bet that's a trick they applied repeatedly.)
Another example: Hypatia is apparently quite well known in feminist studies. The authors got a paper accepted in Hypatia. The authors also suggested that Hypatia was "surprisingly warm" to a paper (actually, it turns out, an essay) that suggested that white men be forced to sit on the floor in chains in classrooms. What the write-up doesn't tell you is that that's not the paper Hypatia accepted; they accepted a much more banal paper about the legitimacy of academic hoaxes.
Another example: they highlight (and so does their press coverage) a paper that spins a section of Mein Kampf into a critical race/gender theory argument. What they don't say up front is that they tried this three times, once with a relatively serious journal, once with a journal with an impact factor of 1.1, and finally with a 0.8. It's the 0.8 that accepted them; the others rejected. In fact, academic science --- STEM included --- is littered with bullshit journals, but generating that fact as the result of their "experiment" wouldn't have made for such an interesting story.
Kieran Healy makes an interesting point about this on Twitter: it's probably easier to get totally fake papers accepted than real ones. Recall Matt Might's cartoon essay about PhD theses and the tiny dent any one paper makes at the boundary of a science. If you aren't constrained by reality, it's easy to generate a paper that seems impactful. Meanwhile, the whole enterprise of peer review relies on good faith: reviewers are donating their time, and for every paper published, there are many many more that had to be read and rejected.
When people criticize post-structuralist academic studies in good faith, one of the most common counter-criticisms is that they simply don't have a sufficiently deep understanding of the subject, of its terminology, history and so on. At the very least these kinds of pranks demonstrate that no one has a "sufficiently deep understanding" of this buzzword soup. People who think they have some secret knowledge or intellectual superpowers to parse the unparseable simply delude themselves.
The papers that got accepted aren't "buzzword-soup". They're intelligible and in some cases include plausible methodological data. That's one of the things that's driving me so nuts about the prevailing narrative about this hoax: people think what these people did was "Sokal^2", but really it's log(Sokal).
The papers are exactly what one would expect from reading other types of postmodernist "critique". The whole reason reviewers missed the blatant absurdity of their thesis is because standard buzzwords of the field hide meaning. This is what Orwell warned us about in Politics and the English Language.
Respectfully, I think what you're doing here is taking what you know about the Sokal hoax and applying it directly to this hoax. It's hard to blame you, since that's what the media did as well. But, while the papers we're talking about were fabricated and often did make ludicrous arguments, they weren't "postmodernist critique", and, as I said, many of them included data and plausible methodology. Orwell wouldn't have much to say about them.
"Maybe there's something glaringly wrong in that particular paper, but these authors do nothing to actually demonstrate that."
He first states that the particular paper's flaw is the use of implausible statistics, then goes into more detail:
"There was also considerable silliness including claiming to have tactfully inspected the genitals of slightly fewer than 10,000 dogs whilst interrogating owners as to their sexuality (“Dog Park”),"
Is that silly? As an academic, am I meant to be familiar with the number of dogs who frequent a particular park in a country I've never been to? As the reviewer of that paper, what exactly are you suggesting - that I accuse them of lying?
Academia is set up to tackle people who fabricate their results - the reputational damage would destroy most people's careers. But that mechanism is not some sort of fact-checking investigation by the peer reviewers.
Let me put the counter to you: that paper has been cited precisely 0 times according to the publication's website. In fact, the only references I can find to it are non-academic websites which generally trawl research for funny papers they can write jokey articles about. So what has this told us about academia?
>Academia is set up to tackle people who fabricate their results - the reputational damage would destroy most people's careers. But that mechanism is not some sort of fact-checking investigation by the peer reviewers.
This is an important point, and slivym shouldn't be downvoted for making it. Many people outside academia seem to have unrealistic expectations of the peer review process. Reviewers can't, in most cases, verify experimental results. Ironically, this is especially true of the so-called "hard" sciences, where a typical experiment might take months or years of preparation and cost lots of money to carry out.
The only real protection against fabricated results, in any field, is the honesty of its practitioners. What we have here is a group of rather naive people discovering that it's easy to lie and get away with it.
> Many people outside academia seem to have unrealistic expectations of the peer review process
Disagree, and I think slivym has confused peer review with sloppy review, or perhaps peer review is indeed sloppy in their field. Yes, peer review is a bad system, but it is not "nothing". Reviewers won't redo the experiment, but they may ask for a lot of additional work in reviews; they can be opinionated, disbelieving, and this is good: it keeps a certain baseline and, in my experience, always improves the paper. You can easily tell whether a manuscript has been reviewed or is a first submission. It's not a matter of black and white: peer review sits somewhere in between, but it's not black.
I didn't say that peer review was "nothing". I said that reviewers cannot usually verify the results of the experiments in the papers that they're reviewing.
For example, my (erstwhile) field is linguistics. Suppose that I review a paper about noun incorporation in Mohawk, and the author makes various claims about which kinds of nouns can and can't incorporate into which kinds of verbs. Not being an expert in Mohawk, I can't verify those claims. If the language in question is something less studied than Mohawk, it may be that there is no reviewer available who can verify the claims without making an impractical expenditure of time and effort.
At some point, you just have to rely on the fact that most people are not fundamentally dishonest. It's like that in any field. Reviewers work for free, and aren't going to spend months verifying the results of a complex experiment.
Yeah, I expanded on that. Redoing the experiment is not the only way to improve the results, and it would be dangerous to claim that a paper should be accepted just because you can't outright falsify it. With more science being published by more people than at any other time, the standards should always be raised. Trust is not to be assumed, imho.
As you say, reviewers won't (and most likely, can't) redo the experiment. For this reason, there is no real protection in the review process against people making results up.
If you didn't trust by default, you'd never publish anything.
> For this reason, there is no real protection in the review process against people making results up.
You can still sanity-check the results, even without redoing the experiment. For example if the average agreement to a certain question on a 1-5 scale among a cohort of 10 people is reported to be 3.26, you might want to ask for the raw data, because that average is only possible with fractional answers.
I recall a study looking at such impossible aggregate statistics leading to several retractions of articles whose data had been made up outright.
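To make that concrete, here is a minimal sketch of such a consistency check (a GRIM-style test; the function name and the 1-5 scale bounds are my own assumptions):

    def grim_consistent(reported_mean, n, lo=1, hi=5, decimals=2):
        # n integer answers between lo and hi can only produce certain means;
        # the reported mean is consistent only if some integer total rounds to it.
        for total in range(n * lo, n * hi + 1):
            if round(total / n, decimals) == round(reported_mean, decimals):
                return True
        return False

    print(grim_consistent(3.26, 10))  # False: no 10 integer answers average to 3.26
    print(grim_consistent(3.30, 10))  # True: e.g. a total of 33

This is the kind of check a reviewer could run in seconds from the reported numbers alone.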
Similarly, when someone claims that "four men watched 2,328 hours of hardcore pornography over the course of a year and took the same number of Implicit Association Tests", you might realize that 2328 hours/(4*365) comes to about 1 hour 36 minutes per day per person, and ask for the titles and duration of the porn allegedly watched, just to make sure that this extremely onerous experiment has actually been performed.
Note that the paper about that "experiment" was not accepted, but at least one reviewer actually recommended less data ("My first piece of feedback on how to make this hybrid article work is that they should remove the quantitative data."), perhaps due to a misunderstanding of sample sizes ("It makes no sense to undertake quantitative analysis for four people – when you flatten the detail out of a sample of four you’re not left with anything interesting.") - the real sample size of observations is at least 2328.
I realize that peer review mostly doesn't operate at that level of scrutiny, but maybe it should. Checking the raw data requires slightly more work from both reviewers and honest authors, but it increases the workload of dishonest authors from "make up a few numbers" to "make up as many numbers as if you'd actually done the work, and don't introduce statistical anomalies", shrinking the gap to "actually do the work".
So even though you need to trust authors a little, it's certainly possible to trust less. There is no perfect protection against academic dishonesty, but there could be better protections.
> Similarly, when someone claims that "four men watched 2,328 hours of hardcore pornography over the course of a year and took the same number of Implicit Association Tests", you might realize that 2328 hours/(4*365) comes to about 1 hour 36 minutes per day per person, and ask for the titles and duration of the porn allegedly watched, just to make sure that this extremely onerous experiment has actually been performed.
I don’t see the point. The authors could easily respond with a long list of porn titles. (And as an unpaid reviewer with lots of real work to do, are you going to bother verifying every title in the long list?)
>Note that the paper about that "experiment" was not accepted
Then it's not a very good example to base your argument on.
More generally, virtually no-one understands statistics. Every field where statistical analysis is used routinely publishes papers that use bad statistical methods.
>I realize that peer review mostly doesn't operate at that level of scrutiny, but maybe it should.
What does the “should” even mean here? Do you think that reviewers who work for free “should” do even more work than they do already? Or that journals “should” force reviewers to do this (even though they have no mechanism for doing so)? There are practical limits to the amount of scrutiny any given paper can be subject to. It would suck if we needed to spend more time reviewing papers just because a bunch of assholes keep trying to get fake papers published.
>it's certainly possible to trust less.
Not really. You don't seem to realize that more scrutiny during the review process would require real people to give up more of their real time for free. You can't just snap your fingers and make that happen.
> are you going to bother verifying every title in the long list?
It should be enough to randomly sample a subset for verification, similar to probabilistic proof checking in cryptography.
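As a rough sketch of the idea (all names here are hypothetical; verify() stands in for whatever manual check a reviewer would do):

    import random

    def spot_check(claimed_items, verify, k=20, seed=None):
        # Check a random sample of k claimed items instead of all of them.
        # If a fraction p of the list is fabricated, the chance of catching
        # at least one fake is roughly 1 - (1 - p)**k.
        rng = random.Random(seed)
        sample = rng.sample(claimed_items, min(k, len(claimed_items)))
        return all(verify(item) for item in sample)

With, say, 10% of a list fabricated, 20 random checks catch a fake with probability about 1 - 0.9^20 ≈ 0.88, so padding the list wholesale becomes risky.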
>>Note that the paper about that "experiment" was not accepted
> Then it's not a very good example to base your argument on.
You're welcome to suggest a better example ;)
> You’re both wrong. You can’t treat 2328 observations from 4 subjects the same way as 2328 observations from 2328 subjects
You're right of course, but it really depends on how you want to generalize. Observing only 4 subjects makes it hard to estimate population variance and generalize to other subjects, but having 2328 observations of the same subject should give great insights into measurement reliability and changes over time, for those subjects.
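A back-of-the-envelope way to see this is the standard design-effect (Kish) correction for clustered observations; the within-subject correlation of 0.5 below is just an assumed value for illustration:

    def effective_sample_size(n_total, cluster_size, icc):
        # Repeated measures on the same subject are correlated, so they
        # carry less information than independent observations would.
        design_effect = 1 + (cluster_size - 1) * icc
        return n_total / design_effect

    # 2328 observations from 4 subjects (582 each): with a moderate
    # within-subject correlation, the effective n for between-subject
    # claims collapses toward the number of subjects.
    print(effective_sample_size(2328, 582, 0.5))  # ~8.0

For within-subject questions (measurement reliability, changes over time) the picture is much better, which is the distinction I was trying to make.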
> Do you think that reviewers who work for free “should” do even more work than they do already?
I think that reviewers should be compensated adequately for their work, ...
> Or that journals “should” force reviewers to do this (even though they have no mechanism for doing so)?
... by the journals, which can use some of the revenue they make selling subscriptions to enforce a quality standard for the papers they publish.
> It would suck if we needed to spend more time reviewing papers just because a bunch of assholes keep trying to get fake papers published.
Some assholes try and succeed at publishing fake papers, some of them potentially influencing important decisions, e.g. in medicine. You can of course decide that it's not worth the effort to try and stop them, but I feel that publishing fake results should be as hard as possible.
> You can't just snap your fingers and make that happen.
But I can argue on the internet about it. Maybe that doesn't change anything, but it makes me feel better.
You mean, I'm welcome to find supporting evidence for your argument? Shouldn't that be your responsibility?
>I feel that publishing fake results should be as hard as possible.
Making it "as hard as possible" would mean using almost all of the world's resources to try to stop fake results being published. If you really want to make the review process more rigorous, you need to present a concrete plan specifying (a) who's going to do the work and (b) who's going to pay for it.
If you can't do that, then what makes you so sure that the review process isn't already as rigorous as it can reasonably be, given the reality of human frailty and limited resources?
> You mean, I'm welcome to find supporting evidence for your argument? Shouldn't that be your responsibility?
Fair enough.
I had some trouble finding the original better example I had mentioned earlier ("I recall a study looking at such impossible aggregate statistics leading to several retractions of articles whose data had been made up outright."), but while trying to find it, I stumbled on another.
A study gets published finding a large effect, large enough to cause a replication attempt: "The effort to replicate the original study was successful in everything except the creation of the PSU-level structural stigma variable."
The suspected reason for this replication failure (imputation of missing data) turns out to be wrong when the original authors have someone check their code for data analysis, which is found to contain an error.
If that code had been checked during peer review (or at least afterwards, by including it with the publication), the effort would have been less than a full-blown replication attempt.
Simply checking reported means and sample sizes for consistency revealed mathematically impossible results in 50% of tested papers.
The author goes to great lengths to stress that such inconsistencies don't necessarily imply fraud (some are honest mistakes), but the behavior of some of the contacted authors when asked for their data appears very sketchy.
Again, if there were a culture of looking at raw data and inspecting analysis code during peer review, those studies reporting obviously incorrect results would not have been published, saving everyone who relies on such studies a lot of trouble.
So, who's going to do that work and who pays for it? I'd be surprised if I managed a working plan on the first attempt, but here's my proposal:
- Authors prepare their data and code together with instructions in such a way that an expert in their field can work with them without having to ask the authors for additional information. It should be attached to the paper as supplementary material. If the data is privacy-sensitive, it should at least be made available to reviewers to check that the results follow from the data. Who pays for it: whoever pays the authors to be writing papers in the first place.
- Reviewers do that sanity-check of running the code on the data to verify that the instructions are complete and the results match what is reported in the paper. They scrutinize the code to the level they'd apply to a methodology section. Who pays for it: the readers of the published paper, since they benefit from not having to do the peer review themselves when they just want to use the results.
Maybe that's unrealistic and the review process is
> already as rigorous as it can reasonably be, given the reality of human frailty and limited resources
>Who pays for it: the readers of the published paper, since they benefit from not having to do the peer review themselves when they just want to use the results.
I can't make sense of this. Are you suggesting that journals should pay reviewers and finance this by charging people (more) to read articles?
As for reviewers checking statistical analyses, remember that this is peer review. The reviewers, on average, are going to be just as sloppy and ignorant and careless as the authors. If a field is awash with papers with bad stats, then most reviewers (being drawn from the same pool of people) will not be competent to check a paper's stats. Andrew Gelman isn't available to review every social science paper.
Help me understand how you can coherently want scientific research to be free, rather than locked up in paid journals, and at the same time believe that unpaid peer reviewers should respond to every submission with a kind of counter-research project to rebut the findings of those submissions?
Further: would you rather have peer reviewers be established experts in their fields, or new grad students? If the former, how does reconstituting peer review to trade expectations of good faith for diligent, expensive, tedious review impact who will end up willing to do peer review?
> Help me understand how you can coherently want scientific research to be free, rather than locked up in paid journals, and at the same time believe that unpaid peer reviewers should respond to every submission with a kind of counter-research project to rebut the findings of those submissions?
Because of that incoherence, it's not what I want. I support paying reviewers for the important work they do. If the research is of a kind that should be available to the public for free, then that's a job for research-funding organizations, who could explicitly pay for that cost (rather than implicitly by granting funds to scientists who then end up working as reviewers for free).
Indeed there is nothing wrong with any concept. If they wanted to show some systemic bias they should also publish a study about how to train women like dogs to attack men.
> Reject studies that have implausible findings?
Yes (if by implausible you mean unconvincing). Reject and resubmit with more powerful statistics.
> or publish it and let people follow up and do their own studies
Indeed, like good software, you publish a barely functioning version and let others find the bugs /s
> authors are welcome to suggest anything they like as a result of their research
Not really; editors will often tone you down. Otherwise we might do away with peer and editorial review.
I think you are advocating for 'anything goes' publishing here. Sure, but then these journals should not call themselves scientific; it's a travesty to co-opt that name to validate their existence. Which in the end proves the point the author makes.
The author clearly has an agenda here; that's why they're publishing a blog, not a "science journal".
>Yes (if by implausible you mean unconvincing). Reject and resubmit with more powerful statistics.
I should clarify here- I mean implausible by the standard of common sense, but backed up by the statistics presented in the paper, which was the case here.
I think it's kind of shady that these three only published very selective quotes from their reviewers. Any skeptical or critical comments (even on the papers that were rejected) are omitted.
Given how long this article is, it doesn't seem likely they were omitted for length. They're trying to present a one-sided slant of the peer reviewers as overly credulous.
It's not just that they were published with "implausible" results. It's three things:
1. That they were published with obviously impossible results (inspecting the genitals of 10,000 dogs while asking owners about their sexuality).
2. It's where they were published. e.g. In the #1 "best" feminist journal Hypatia.
3. It's the evil in the messages they got published. They got a 3000-word excerpt of Mein Kampf published, by rewording it in the language of intersectional theory. They got published recommending that "privileged" students be physically chained and silenced during class. They got published by proposing men be trained like animals. (This is not the same as learning about human behavior by studying dogs; it is to suggest that it's okay to treat people like dogs if they're born a certain way).
All this demonstrates a field of study that is not just soft in terms of intellectual rigor ("implausible"), but is willing to casually support reprehensible policy ideas if they're framed within a specific neo-Marxist belief system and targeted at men or white people.
Bear in mind that many of the people producing the papers in these fields receive public funds at public institutions. They project an image of being neutral scholars to justify those funds. What this hoax has done is given evidence that they are not neutral scholars. They are moralizers, activists, revolutionaries working to target a specific group of people, and are not very concerned about the rigor of the intellectual pathways they use to get there.
A group of people dedicated to a process are called scientists. A group of people dedicated to a conclusion is called a religion or an ideology. Social scientists claim to be the former. But the hoax papers demonstrated that, at least with some regularity and even in the highest journals, the social sciences act like the latter.
> (inspecting the genitals of 10,000 dogs while asking owners about their sexuality).
Why is that 'obviously impossible'? How long does it take to look at a dog's genitals, 5 minutes?
I haven't read the paper, but you could get a team of 10 undergrad (or grad) students, and send them out to different dog parks in the area. If each student inspects 20 dogs in a day (say they go to the park for a few hours--I assume many owners will say no) then you've got your data after 50 days of work.
Say your students are only free on weekends (when dog parks are likely more busy, anyway) then it'll take you a little more than six months to gather all your doggie-dick-data.
That's not at all an unreasonable length to run a study.
The paper does say this: "While I closely and respectfully examined the genitals of slightly fewer than ten thousand dogs, being careful not to cause alarm and moving away if any dog appeared uncomfortable...". The term "I" implies it was a single person doing all that. Elsewhere, it specifies that the total time spent observing dogs was 1000 hours.
Elsewhere: "From 10 June 2016, to 10 June 2017, I stationed myself on benches that were in central observational locations at three dog parks in Southeast Portland, Oregon. Observation sessions varied widely according to the day of the week and time of day. These, however, lasted a minimum of two and no more than 7 h and concluded by 7:30 pm (due to visibility)." It'd be an average of nearly 3 hours per day, every day.
Ok, so it's somewhat implausible for one person to look at this many dog genitals, but it's certainly physically possible. How many animals does a veterinarian see in a year?
Because you can't get 10 undergrad students to each put in 50 full days of work. Just can't be done.
I do hope you see that such a thing would not be a reasonable demand on these people. That's WAY too much. That's literally 1/5th of their entire school year.
Why not? When I review papers I am not trying to catch malicious authors. "This seems like an incredibly time consuming dataset to gather" is not a reasonable reason to reject. What is an honest author supposed to do in that situation? Collect less data?
The Hypatia article that got accepted was notoriously mild in comparison to many of the others, if you look at the summary:
> Thesis: That academic hoaxes or other forms of satirical or ironic critique of social justice scholarship are unethical, characterized by ignorance and rooted in a desire to preserve privilege.
This is obviously something that can be argued, even taking this very same project as an example! But for the authors to claim that the position is absurd begs the question in favor of their project. That looks disingenuous to me.
My post was a bit flippant, but it does capture my strong feelings on this matter.
Not that it matters now that my post has been downvoted to oblivion, but I do think we need a complete clear-out of the Arts faculties and to start again from scratch. The arts are far too important to be left in the hands of the people who have passed through an arts degree in the last 40 years.