What’s the best (or worst) review you’ve received?
It’s Peer Review Week and we’d like to hear your thoughts
Published in Ecology & Evolution

Can you believe this is the fifth year of Peer Review Week?!
There’s a great post about the history of PRW on the Scholarly Kitchen if you’re keen to learn more about how it started and what it’s grown into. Most years have had a theme, from transparency to diversity, and the topic chosen for 2019 is…
‘Quality in Peer Review’
So we’re asking:
What’s the best (or worst) review you’ve ever received? And why?
We’d love to hear from you so please do take the opportunity to share your thoughts and experiences on Peer Review! Simply use the comment box at the bottom of this post to share your answers.
One thing – as with any contribution to the Community, do keep in mind our Community Policy when sharing your comments (basically, be nice and stay on topic).
Got a question? Get in touch with us here.
I find this call an excellent idea, because this topic is in urgent need of discussion. There are a number of peer review panels out there, but a wider, open and transparent forum is needed.
I’ll put it strongly: I think the peer review system is broken. Or, perhaps more accurately, its organization and evaluation are primitive and, hence, dysfunctional. More and more often I find myself collecting my manuscript after one or more rounds of review only to resubmit it elsewhere, with barely any change beyond the formatting for the new journal.
Don’t get me wrong: I am not about to complain about the harshness of reviews, nor am I that stubborn scientist crusading against the world. Well, perhaps a little bit, to the extent that we all have to be when trying to put forward new ideas. But what I am denouncing here is much more serious. It is the abundance of “reviews” that entirely miss the task they are meant to fulfill. And far too often this goes unchecked – but we’ll come back to that.
A reviewer may be more or less a promoter of their own views, and their review may be more or less emotional, but, at a minimum, we expect that, in its main lines, the review will:
- Discuss the evidence at hand and its analysis;
- Try to contribute to the betterment of the work by providing constructive content – unless the study is flawed or clearly below quality standards;
- Consider the value of the evidence presented for the community, regardless of interpretational differences.
This is how the peer review system could be really powerful and efficient, by contributing more minds to the problem at hand, and in so doing, widening perspectives and providing avenues of investigation that the original authors may have missed.
Instead, about two thirds of the reviews I receive do one (or often several) of the following:
- Are openly hostile, generally implying that everything written is wrong because the authors should not be allowed to publish, regardless of what is actually said;
- Try to unsubtly block the publication by asking for absurd complementary analyses or by simply claiming that the interpretations are unjustified or incorrect when, in fact, they are clearly sound to all parties. Working in palaeontology, I have had reviewers recommend rejection because the manuscript lacked a detailed description of every specimen documented and a justification that each belonged to the correct morphotype (which, of course, is not how we proceed), or hypocritically claim that specimens were wrongly assigned to a given morphotype – when in fact they were correctly assigned;
- Attack alleged weaknesses of the manuscript during a first round of review and, when the authors show these criticisms to be incorrect, ignore those points and invoke other supposed caveats in subsequent rounds, going so far as to question, when running short of arguments, the originality of the study;
- Discuss at length the reviewer’s own opinion on the topic, and how the incomplete study should be rejected, while the reviewer has seemingly not even read the manuscript, since most of these points were already addressed in the submitted text;
- Initiate debates irrelevant to the objectives of the paper and the evidence presented, and present these debates as major arguments for rejection;
- Provide a two-sentence review stating that they “do not believe” in the conclusions of the paper;
- Reject the validity of the Linnaean nomenclatural system and the concept of macroevolution.
The consequence of this is not only to hamper the advancement of science, but also to over-protect paradigms defended by influential colleagues and their followers, and, after a while, to lay the groundwork for dogmas that are more comfortable than truly sound and solid.
It is more than obvious to me that a system for evaluating reviews is lacking. The idea behind Peerage of Science is very interesting in that regard, as they propose to cope with this problem by having the reviews themselves evaluated through peer review. However, (1) this could become time-consuming, (2) reviewers must be registered with Peerage of Science to participate, (3) a wide range of journals would have to adopt the system to make it worthwhile, and (4) such a fundamental process should not be left to a single company, one which has journals pay to be able to browse through reviewed manuscripts (at least for some options).
Perhaps easier would be to provide editors with a shareable database in which reviews could be secondarily evaluated, for instance by the editors themselves, or at least by those covering the same subject area. Although I am outside the editorial community, I do not doubt that editors already exchange views about the suitability of different reviewers, but, given the above, something much more systematic and objective is needed.
And this brings me to my next, and perhaps most important, point: the responsibility does not stop at the circle of authors and reviewers. Editors have a large and direct involvement in this issue. This starts, of course, with the apparent unwillingness of many editors to pass judgment on blatantly inadequate reviews, or at least their reluctance to call them into question, as long as these are disguised as professional reviews or come from alleged experts. It seems impossible that this stems from editors being unaware of what an adequate review looks like, and I suspect that timeliness and other journal constraints come into play as well. But the problem also goes much further than that. Consider, for instance, cases such as:
- Editors replying that “although they think the authors could provide counter-arguments to the reviewers, they decide not to send the manuscript for further review because they think the reviewers will never concede ground.” If the editors implicitly acknowledge that there are arguments in favour of the authors’ work, they should assume their role as an additional party and decide whether the reviewers’ position warrants rejection (revisions being another matter entirely). Or call for additional opinions. Otherwise, this simply states that new and controversial ideas are fated never to be published in scientific journals, regardless of their strengths. That is, killing science;
- Editors having preferential relationships with some authors, whether as authors or as reviewers. While this need not be a problem, it can certainly lead to conflicts of interest. Authors who do not enjoy the favour of those researchers close to the editors of a given journal are effectively cut off from that journal, and know that there is no point submitting there. That may be common, disavowed knowledge, but as an early career scientist I find it unacceptable, especially when some colleagues occupy several such editorial positions and can block access to a wide range of journals;
- Editors that tend to consider status before content, and give more credit to reviewers whose position or institution is deemed higher or more “prestigious”. This may seem too simple-minded or too openly unethical to be true, but such a bias can express itself at different levels, and can be more or less conscious.
On the other side of the bench, as a reviewer, the behaviour of certain editors can also be disconcerting. This happened to me, for instance, when an editor told the author in his decision letter to ignore my review, because that review dared to criticize some of the work the editor had been involved with. Such partiality has very serious consequences for the representativeness of visible research and for the health of a given field as a whole.
While there are measures to establish in the long term, editors have the power and ability to make improvements in the short term. Asking for more fairness in review through appeals is not really a solution, because authors could also appeal indiscriminately even when a rejection is reasonable. Asking for additional opinions when the original reviews are questionable is something some editors already do, and I have also seen editors take the initiative of having the reviewers assess each other’s reviews. But this also implies that the editor then has to be able to judge the quality of the final reports, and not simply stand back and base a decision on the average outcome.
Of course, the perspective on peer review will depend on the field, the topic, the research group, and the individual researcher involved. But this is also part of the problem: complete impartiality in peer review is impossible (and perhaps at odds with the idea of peer review itself), but we should strive for more consistent fairness and objectivity across publications.
C.A.
Thanks so much for your detailed comments, Cedric. Objectivity from all sides of peer review seems to be a bugbear for many... May I ask your thoughts and/or experiences on open peer review?
Most people with some publication experience go through their share of infuriating or totally disconcerting reviews, but, as you can see, they’re not waiting in line to protest. Most just live with it. Some even enjoy the game and the dirty tricks. If you’re in a comfortable situation with a job waiting for you or if you already have a position, the occasional wrecks sent to you as reviews and the detachment of editors won’t be the end of the world. If you have built your circle of influence, you can even expect some reviewers to tilt the balance by actively supporting you (which is the reverse bias and makes you wonder if, indeed, publishing is not just a bad political game). But if you’re “on the market” as they say (marketable commodities that we are), and if these reviews, written by people with animosities or thinking that rejecting a paper is inconsequential and almost entertaining, start to accumulate, you won’t publish any more and your career will be seriously threatened. Of course, some of these reviewers do hope for you to vanish off radar, and they write these reviews with that purpose; but let’s just keep some optimism and say they are rare. That’s the real world all researchers know about, and yet, when looking at the media, and like many other things, it somehow disappears behind a smooth screen of quasi-dystopian merriment.
I don’t have much experience with open peer review, but I think it isn’t a trivial matter. Having reviewers reveal their names could help mitigate the attitude of some of them, and would force a bit more quality overall into the delivery. But this can also be seriously detrimental to objectivity, because the peer review system and the entire community would become even more personal than they already are. In such a context, friends, foes and collusions can quickly become everything that matters. This would be especially true for smaller research communities. And so making reviews completely transparent may not be the solution we need. Editors, however, have always had access to the reviewers’ names, and yet it seems they have not used that information to weed out researchers lacking the competence to provide appropriate reviews. I’d therefore put more weight on the responsibility of editors than on open peer review.
As I commented briefly in a post on the Nature Ecology & Evolution research community (March 26, 2019), our recent paper in Nature Ecology & Evolution (Zhang et al., 2019) and the other one in the same journal (Vankuren and Long, 2018) received excellent reviews – definitely among the top five papers that received the best reviews of my career, the other three being Zhang et al. (2011, PLoS Biology), Zhang et al. (2004, PNAS), and Long and Langley (1993, Science). I use the words “excellent” and “best” not because those reviewers endorsed our papers or the editors accepted them, but because of the reviewers’ high standard of scientific research, their professional level in the related fields and in publication, and their role in improving these papers in both science and presentation. Some of these reviewers were so generous that they even participated in further analyses of the data we presented in those manuscripts, generated significant results, and encouraged us to use them in the manuscripts.
For example, our 2018 paper in Nature Ecology & Evolution is such a case: a reviewer pointed out an additional candidate gene duplication for resolving sexual conflict, prompting us to do further analyses in other Drosophila lineages. We found two more candidates. We are grateful to this reviewer but regret that we could not even mention his or her name in the acknowledgements because, for understandable reasons, the editorial rules of Nature Ecology & Evolution did not allow the reviewers’ names to be shared with us, nor a thank-you to the reviewer at the end of the paper. I wish these anonymous reviewers knew our gratitude for their hard work and their generosity with their critical comments, which contribute to the beauty of intellectual life in the world of scientists.
Maybe they've read your comment, Manyuan :)
BUTTERFLY PREFERENCE TESTING TECHNIQUE ENRAGES REVIEWERS
You might think it would be hard to stimulate reviewers to enraged eloquence by trying to publish a slightly novel technique for testing the oviposition preferences of butterflies. But you'd be wrong! Back in 1981, I was a displaced Yorkshireman struggling to publish for my tenure in the Zoology Dept at the University of Texas at Austin. I wanted to submit to Evolution a MS showing that the diet breadth of a butterfly population arose from three mechanisms: (1) variation of preference rank: different individual females preferred to oviposit on different host genera; (2) weakness of preference: some individual females were generalists; (3) rarity of a preferred host, which caused females preferring it to choose a more abundant less-preferred genus, on which I found them naturally laying eggs.
I reasoned that I should begin by publishing my preference-testing technique, which was simple: to stage encounters with two hosts in alternation and record acceptances and rejections without allowing oviposition. By this means I could estimate the length of time that the butterfly would search in the motivational state in which it would accept only its preferred host, before reaching the level of motivation at which either host would be accepted, whichever were next encountered. This length of time turns out to be a heritable trait, unaffected by learning.
I did not send the MS to Nature, I sent it to Ecological Entomology. The editor, John Lawton, replied:
"Dear Mike, I sent your MS to three referees in the hopes of finding someone who might like it a little. Sadly, i failed. Clearly, you will have to think again. I would NOT be prepared to look at a revise. Yours, John. "
He sent this memorable review:
"The interesting, if prolix, titles and abstracts led me into the introduction with great expectation. There, my interest was mired in the third sentence and NEVER extracted. The overlapping and unclear denotations of "preference," "specificity" and "choice" make the MS extremely difficult by the first page. These and ordinary problems of syntax make it impossible by the second. At the risk of being wrong, it does appear that the subject is interesting and that the MS could be rewritten and rendered reviewable. As it stands it is not."
Well, on reflection this doesn't quite meet my initial assertion that reviewers were "enraged." So, how about this one, with respect to the same technique:
"The business of motivation involved in the so-called preference test is incompetent, irrelevant, immaterial and without any foundation whatsoever in the established literature."
I called the then editor of Evolution, Doug Futuyma, to ask whether he would consider my MS reporting the results of preference-testing, even though I could not get the technique published. He said NO. Quite right, too, Doug.
My former PhD supervisor, Paul Ehrlich, called me, worrying kindly about my tenure. I explained my problem. He said: "Don't you need to get this published if you're to get tenure?" "Er, ...yes." "OK, don't worry, I'm an editor of Oecologia, you can publish it there, it's better than Ecol Ent anyhow. But, of course, I shouldn't handle it personally, just send it to Charlie Krebs and tell him I suggested it." So I did, and Charlie sent it along to Paul with a negative review that I never saw, and a letter telling Paul to do his own dirty work if he wanted this nonsense to see the light of day. However, it WAS published in Oecologia, by means that I should not admit and that would be seriously embarrassing were I not proud of my technique, which continues to generate publications, including a cover paper in Nature last year.
Back to 1982. Now that my technique was published, I submitted my MS to Evolution. It was resoundingly rejected. My chance of tenure was fading like the photos in the "Back to the Future" movie when the past was tampered with. I called the editor:
"Doug, I'm feeling much misunderstood down here in Texas."
"Why's that, Mike"
"You rejected my MS. I felt that I posed an important novel question, tackled it experimentally and got some answers."
"Frankly, Mike, I read your MS and I didn't think you'd shown anything at all."
"Huh? Why not?"
"Suppose you did a t-test on frequencies of acceptance and rejection and got significance, why then I'd see that you'd shown variation of preference."
"That wouldn't show anything at all."
"Huh? Why not?"
So I explained why not, he published the paper, I got my tenure by the skin of my teeth, and I'm STILL preference-testing the same species of butterfly with exactly the same technique. THANKS for listening, Doug! Many editors wouldn't take the trouble.
I do think that preference tests cannot have a universal design, even for butterflies. They should be devised around the behaviour of each study organism. I never implied that anyone else should use my technique and to my knowledge only one paper using it has been published by anyone but me. However, if folk perceived an unintended implication that they should be using this relatively labour-intensive technique, perhaps that would explain their reaction against it. I sent a MS using the technique to my friend Liz Bernays, who said:
"I wouldn't believe you could do what you SAY you can do.....(long pause)... except that it produces such sensible results" which I took to mean "You expect me to believe this?"
I submitted a MS using the technique to Ecology, co-authored with two Finns, including a high-profile Finn, Ilkka Hanski. We got a review that said:
"This preference-testing technique used to be considered highly suspect, but I suppose that by now it has been used by several people."
Ilkka said" Mike, I begin to see the problems you have had:" and I replied
"Ilkka, what this means is that, if your name were not on this, it would have been rejected."
But I survive. A further, unexpected, "review" of my preference-testing technique came in 1988 in the dissertation defence of my grad student, Chris Thomas:
"When Mike Singer was twelve, he discovered ho much nicer it is than sticking pins in butterflies to HELP them to find places to lay their eggs. Unfortunately, he's been coasting ever since."