Intro Content
Nature
Reconstructing the origin of animals' greatest success story
By documenting the origin of the Mandibulata megaclade, a new Cambrian arthropod from the Burgess Shale's newest locality eats away at an emerging paradigm about early arthropod evolution
Popular Content
The fuxianhuiid story, or the inner workings of a paradigmatic science
Has the peer-review process become a fortress of favoured paradigms?
Recent Comments
I find this call an excellent idea, because this topic urgently needs to be discussed. There are a number of peer-review panels out there, but a wider, open and transparent forum is needed.
I’ll put it strongly: I think the peer review system is broken. Or, perhaps more accurately, that its organization and evaluation are primitive and, hence, dysfunctional. More and more often, I find myself collecting my manuscript after one or more rounds of review only to resubmit it elsewhere, with barely any change beyond the formatting for the new journal.
Don’t get me wrong: I am not about to complain about the harshness of reviews, nor am I that stubborn scientist crusading against the world. Well, perhaps a little, to the extent that we all have to be when trying to put forward new ideas. But what I am denouncing here is much more serious: the abundance of “reviews” that entirely miss the task they are meant to fulfill. And way too often this is left unchecked – but we’ll come back to that.
A reviewer may promote their own views to a greater or lesser extent, and their review may be more or less emotional, but, at a minimum, we expect that the review will:
- Discuss the evidence at hand and its analysis;
- Try to contribute to the betterment of the work by providing constructive content – unless the study is flawed or clearly below quality standards;
- Consider the value of the evidence presented for the community, regardless of interpretational differences.
This is how the peer review system could be truly powerful and efficient: by bringing more minds to the problem at hand, and in so doing, widening perspectives and providing avenues of investigation that the original authors may have missed.
Instead, about two thirds of the reviews I receive do one or more of the following:
- Are openly hostile, generally implying that everything written is wrong because the authors should not be allowed to publish, regardless of what is actually said;
- Try to unsubtly block the publication by asking for absurd complementary analyses, or by simply asserting that the interpretations are unjustified or incorrect when, in fact, they are clearly sound to all parties. Working in palaeontology, I have had reviewers recommend rejection because the manuscript did not describe in detail every specimen documented and justify that each belonged to the correct morphotype (which, of course, is not how we proceed), or hypocritically claim that specimens were wrongly assigned to a given morphotype when they were in fact correctly assigned;
- Attack alleged weaknesses of the manuscript during a first round of review and, when proven wrong by the authors, ignore these points and invoke other supposed caveats during subsequent rounds, going as far as questioning the originality of the study when running short of arguments;
- Discuss at length the reviewer’s own opinion on the topic, and why the supposedly incomplete study should be rejected, while seemingly not having read the manuscript, since most of these points were already addressed in the submitted text;
- Initiate debates irrelevant to the objectives of the paper and the evidence presented, and treat these debates as major arguments for rejection;
- Provide a two-sentence review stating that they “do not believe” in the conclusions of the paper;
- Reject the validity of the Linnaean nomenclatural system and the concept of macroevolution.
The consequence is not only to hamper the advancement of science, but also to over-protect paradigms defended by influential colleagues and their followers and, after a while, to lay the groundwork for dogmas that are more comfortable than truly sound and solid.
It is more than obvious to me that a system for evaluating reviews is lacking. The idea behind Peerage of Science is very interesting in that regard, as they propose to address this problem by having the reviews themselves evaluated through peer review. However, (1) this could become time consuming, (2) reviewers must be registered with Peerage of Science to participate, (3) a wide range of journals would have to adopt the system to make it worthwhile, and (4) such a fundamental process should not be left to a single company, which charges journals to browse through reviewed manuscripts (at least for some options).
Perhaps easier would be to provide editors with a shareable database in which reviews could be evaluated after the fact, for instance by the editors themselves, or at least by those working in the same subject area. Although I am an outsider to the editorial community, I do not doubt that editors already exchange views on the suitability of different reviewers, but, given the above, something much more systematic and objective is needed.
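To make this a little more concrete, here is a minimal sketch, in Python, of what one record in such a shared database could look like; the field names and the 1–5 scoring criteria are purely hypothetical illustrations, not a description of any existing system:

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ReviewAssessment:
    """One editor's after-the-fact evaluation of a single review (hypothetical schema)."""
    review_id: str                    # anonymised identifier of the review being assessed
    reviewer_id: str                  # identifier of the reviewer, visible to editors only
    subject_area: str                 # e.g. "palaeontology"
    assessed_by: str                  # editor performing the assessment
    assessed_on: date = field(default_factory=date.today)
    engages_with_evidence: int = 0    # 1-5: does the review discuss the evidence and its analysis?
    constructive: int = 0             # 1-5: does it try to improve the work?
    fair_and_relevant: int = 0        # 1-5: is it free of hostility and off-topic debates?
    comments: Optional[str] = None    # free-text justification by the assessing editor

    def overall(self) -> float:
        """Simple average of the three criteria; a real system would no doubt weight them differently."""
        return (self.engages_with_evidence + self.constructive + self.fair_and_relevant) / 3

Editors within the same subject area could then consult such records before inviting a reviewer again, or spot recurring patterns of inadequate reviewing.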
And this brings me to my next, and perhaps most important, point: the responsibility does not stop at the circle of authors and reviewers. Editors are directly and deeply involved in this issue. This starts, of course, with the apparent unwillingness of many editors to look past blatantly inadequate reviews, or at least their reluctance to call such reviews into question, as long as they are dressed up as professional or come from alleged experts. It seems unlikely that this stems from editors not knowing what an adequate review looks like, and I suspect that timeliness and other journal constraints come into play as well. But the problem also goes much further than that. Consider cases such as:
- Editors replying that, although they think the authors could provide counter-arguments to the reviewers, they have decided not to send the manuscript for further review because they think the reviewers will never concede ground. If the editors implicitly acknowledge that there are arguments in favour of the authors’ work, they should assume their role as an additional party and decide whether the reviewers’ position warrants rejection (revisions being another matter entirely). Or call for additional opinions. Otherwise, this simply means that new and controversial ideas are fated never to be published in scientific journals, regardless of their strengths. That is, killing science;
- Editors having preferential relationships with certain researchers, whether as authors or as reviewers. While this need not be a problem in itself, it can certainly lead to conflicts of interest. Authors who do not enjoy the favour of those researchers close to a journal’s editors are in effect cut off from that journal, and know that there is no point submitting there. That may be common, if disavowed, knowledge, but as an early-career scientist I find it unacceptable, especially when some colleagues hold several such editorial positions and can block access to a wide range of journals;
- Editors who tend to consider status before content, and give more credit to reviewers whose position or institution is deemed more “prestigious”. This may seem too simple-minded or too openly unethical to be true, but such a bias can express itself at different levels and can be more or less conscious.
On the other side of the bench, as a reviewer, the behaviour of certain editors can be just as disconcerting. This happened to me, for instance, when an editor told the author, in the decision letter, to ignore my review because it dared to criticize work the editor had been involved in. Such partiality has very serious consequences for how representative the visible research output is, and for the health of a field as a whole.
While some measures will take time to establish, editors have the power and ability to make improvements in the short term. Asking for more fairness in review through appeals is not really a solution, because authors could appeal indiscriminately even when a rejection is reasonable. Asking for additional opinions when the original reviews are questionable is something some editors already do, and I have also seen editors take the initiative of having reviewers assess each other’s reviews. But this also implies that the editor then has to be able to judge the quality of the final reports, and not simply stand back and base the decision on the average outcome.
Of course, the perspective on peer review will depend on the field, the topic, the research group, and the individual researcher involved. But this is also part of the problem: complete impartiality in peer review is impossible (and perhaps at odds with the very idea of peer review), but we should strive for more consistent fairness and objectivity across publications.
C.A.
Most people with some publication experience go through their share of infuriating or utterly disconcerting reviews, but, as you can see, they’re not waiting in line to protest. Most just live with it. Some even enjoy the game and the dirty tricks. If you’re in a comfortable situation, with a job waiting for you, or if you already have a position, the occasional wrecks sent to you as reviews and the detachment of editors won’t be the end of the world. If you have built your circle of influence, you can even expect some reviewers to tilt the balance by actively supporting you (which is the reverse bias, and makes you wonder whether publishing isn’t, indeed, just a bad political game). But if you’re “on the market”, as they say (marketable commodities that we are), and if these reviews – written by people harbouring animosities, or who think that rejecting a paper is inconsequential and almost entertaining – start to accumulate, you won’t publish any more and your career will be seriously threatened. Of course, some of these reviewers do hope you will vanish off the radar, and write their reviews with that purpose; but let’s keep some optimism and say they are rare. That’s the real world all researchers know about, and yet, like many other things, it somehow disappears in the media behind a smooth screen of quasi-dystopian merriment.
I don’t have much experience with open peer review, but I don’t think it is a trivial matter. Having reviewers reveal their names could help temper the attitude of some of them, and would force a bit more quality overall into the reviews delivered. But it could also be seriously detrimental to objectivity, because the peer-review system and the entire community would become even more personal than they already are. In such a context, friends, foes and collusions can quickly become all that matters, and this would be especially true for smaller research communities. So making reviews completely transparent may not be the solution we need. Editors, however, have always had access to the reviewers’ names, and yet it seems they have not used that information to weed out researchers lacking the competence to provide appropriate reviews. I’d therefore put more weight on the responsibility of editors than on open peer review.