A Substack post about peer review is getting a lot of attention, and I’m here to rant about it. Basically, the post calls out the peer review process as a terrible and broken system. And it is. But the author’s rhetoric about it is kind of problematic.
1. Peer review is not an experiment.
The author claims that it is, but contradicts himself straight away:
The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.
These are not just things that make an experiment bad; they are things that preclude peer review from being an experiment altogether. Experiments are run on samples, with intent, in controlled environments. For someone who calls himself an experimental psychologist and names his blog “Experimental History”, he stretches the term “experiment” in strange ways. This usage is like referring to the “experiment of democracy”: grand rhetoric for “we’re figuring things out and learning as we go”.
The author also seems to treat the experiment of peer review as a pass/fail test, which, again, is not what experiments are for. He sets bars for what successful scientific evaluation ought to look like and measures his experiences of peer review against them. But that is not an experiment; it is qualitative assessment. There’s nothing wrong with that, but it’s troubling how the author wraps his proclamation that peer review is bad and should be abolished in a phony hearkening to science-core.
2. General discourse is not enough to validate truth statements.
Various parts of the post indicate that the author considers science to be the evaluation of statements of truth, which can only be verified by their fidelity to observed reality. OK, fair enough. But he points to Einstein’s large body of non-reviewed work as an argument for relying on discourse among educated fellows as an efficient way of evaluating the quality of scientific work. Einstein’s genius is cemented in the popular imagination, but he also happened to be wrong about some things, and one celebrated counterexample is not reason enough to abolish peer review. Moreover, the author never considers the flip side: plenty of non-reviewed ideas won general acceptance and later turned out to be wrong, which directly undercuts his argument.
3. What about non-experimental methods?
The author has a huge blind spot for non-experimental methods. He suggests that if the results of a scientific analysis can be replicated, then that is good enough for acceptance into an authoritative canon of truth. Work that fails to replicate, he says, is “a whole lotta money for nothing”: basically a waste of time and resources. But a lot of science cannot be replicated, simply because it doesn’t follow experimental protocols that allow replication tests to be performed in the first place. Fantastic and valuable work that relies on non-experimental methods, including much of the social sciences and humanities, climate science, ecology, astronomy, and various other fields, is left in the lurch. His take on non-replicability in these disciplines reads a lot like the unethical and ironically non-replicable Sokal hoaxes that serve as the basis for unhinged right-wing attacks on the social sciences and humanities.
This also contradicts the author’s hearkening back to discourse among learned men of olde as a way of dealing with the problems of peer review. Opening up the comments section, even if it’s limited to a curated list of credentialed scholars, is not the same as conducting independent replication studies. I think the reasoning behind this link is that if other people have seen similar phenomena in their own labs, a claim is more likely to be accepted as true. But that is not replication under the same conditions; it’s the same uncontrolled, consensus-based evaluation as peer review, just with an open filter.
4. Peer review in context
I agree with much of what the author is saying. Yes, there are many ways in which peer review is broken and could be improved. For instance, I agree that peer reviewers do not dive deep enough into the data and aren’t always critical enough. But I think this is because most reviewers are unprepared to do so, either because they don’t have access to the data or because they don’t know how to work with statistics or read code. Moreover, journals like PNAS give preferential treatment to certain authors over others, and there are definitely major issues with racism and sexism in the evaluation process. Open peer review does not resolve these problems, precisely because it treats peer review in isolation.
The only way to make peer review better is by instilling good scholarly practices in the next generation of scholars. But this is inhibited by structural issues, such as a tight job market that favours the quantity of peer-reviewed articles over any other factor, and the general prestige economy of academia. These are the root issues. The foul state of peer review is one aspect of this mess, alongside structural racism, sexism, and transphobia, the sheer expense of obtaining an advanced degree and excelling in the years immediately post-PhD, and the pressure to conform to the trends that get you funding. You cannot separate the problems with peer review from these issues. Yet somehow the author manages to completely sidestep these concerns, framing the broken peer review system as a purely epistemic problem rather than a problem with tangible and far-reaching social implications.