(1)
In the past decade, many open-access journals have adopted a business model that taps the human resources of the scientific community. They recruit researchers to serve, for free, as special-issue editors and reviewers. Manuscripts are published online only, without hard-copy printing, and authors are charged expensive publication fees. This is a no-cost money-printing machine that could be run by a single person. From a commercial perspective, this model is super successful.
This strategy is gradually being adopted even by prestigious journals, though in a limited manner. For example, ten years ago, a journal's art editors would create illustrations for accepted manuscripts. Now authors are asked to do it themselves, including the journal cover image, and they are expected to consider it an honor. This has promoted the rise of BioRender (whose function and application I appreciate very, very much). However, authors now have to spend their own time and money to create illustrations.
Researchers who want to publish may wonder: if we have to perform all these services ourselves, why should we pay the journals? We could simply create our own journals to publish our own studies. It's not rocket science. In fact, that is what research societies did before the rise of for-profit scientific publishing. With the internet and modern publishing tools, it's time we scientific researchers went back to that tradition.
(2)
In biomedical research, why do we prefer to cite papers published in prestigious journals? Because we believe they pass two measurements: (1) research quality, including novel data types generated from technological advances; and (2) biomedical significance, for example, the potential to yield therapeutic targets or to identify essential factors in disease progression. The results of such studies can guide new research toward a better understanding of biomedical systems and toward effective approaches to problems in human health. Journal editors usually refer to the threshold of these measurements as "general interest".
As 90% of manuscripts receive desk rejection on submission, it is journal editors who select the studies and perform these measurements for us. We trust their judgment and use the measurements to evaluate the performance of our own and others' research.
Can we trust the research community to perform the same measurements on our studies? After all, we are the ones who actually do the research, don't you think? There are so many preprints in the cloud. Why can't we evaluate them ourselves and report our assessments to the community? But if we cannot find reviewers for the preprints we select, how can we perform the measurement? Here are some ideas:
1. Find preprints whose results support each other. Group them together and explain how the combined data demonstrate reproducibility, and therefore the quality, of the studies.
2. Find preprints whose studies were performed in models of different species yet reach similar conclusions. Biological factors that are evolutionarily conserved must be important, and therefore significant.
3. Track the number and research fields of studies citing the preprints since the time of posting, to test whether they are of general interest (a minimal sketch of such tracking follows this list).
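To make idea 3 concrete, here is a minimal sketch, assuming we already have a list of citing papers with their research fields and citation dates. The record format and the `field_diversity` proxy are my own hypothetical choices, not an established standard:

```python
from collections import Counter
from datetime import date

# Hypothetical citation records for one preprint: (citing paper's
# research field, citation date). In practice these could come from
# a citation database export.
citations = [
    ("cell biology", date(2023, 4, 2)),
    ("immunology",   date(2023, 7, 19)),
    ("cell biology", date(2024, 1, 8)),
    ("neuroscience", date(2024, 6, 30)),
]

def citations_per_year(records):
    """Count citations accumulated in each calendar year since posting."""
    return Counter(d.year for _, d in records)

def field_diversity(records):
    """Number of distinct research fields citing the preprint:
    a crude proxy for general interest (more fields, broader interest)."""
    return len({field for field, _ in records})

print(citations_per_year(citations))  # Counter({2023: 2, 2024: 2})
print(field_diversity(citations))     # 3
```

A rising per-year count combined with growing field diversity would support the claim that a preprint is of general interest.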
(3)
At this moment, peer review is still a well-accepted mechanism for evaluating the quality and significance of a research paper. Let's not argue about whether it is an effective mechanism, and instead ask this question: is there any alternative approach for evaluating preprints?
I actually think so.
A study is cited in a paper for various purposes. If cited in the Introduction, the study usually helps build the case for the author's hypothesis. If cited in the Methods, the study's technology was likely adopted by the author. If cited in the Results, the study likely helps interpret the output data. If cited in the Discussion, the study helps validate the author's conclusions.
By analyzing how a preprint's citations are distributed among the sections of the papers that cite it, we may be able to evaluate the significance of its results. If a preprint is always cited in the Methods section, that may be evidence that its technology is used repeatedly, so it should be important. If it is cited only in Introductions or in review articles, we can say its results have not been scrutinized.
If a study shows this is indeed the case, a "section citation index" could be developed and assigned to both preprints and published papers, allowing them to be compared. A minimal sketch of such an index is given below.
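As an illustration only, one way to compute such an index is to count where each citation lands and weight the sections by how much scrutiny they imply. The weights below are hypothetical assumptions of mine, not values proposed anywhere in this post:

```python
from collections import Counter

# Hypothetical weights: sections implying deeper engagement score higher.
# These values are illustrative assumptions, not an established standard.
SECTION_WEIGHTS = {
    "introduction": 0.25,  # background citation, little scrutiny
    "methods":      1.00,  # technology reused: strong practical validation
    "results":      0.75,  # used to interpret new data
    "discussion":   0.50,  # used to support conclusions
}

def section_citation_index(citing_sections):
    """Average scrutiny weight over all citations of one preprint or paper.

    citing_sections: one section name per citation,
    e.g. ["methods", "methods", "introduction"].
    """
    if not citing_sections:
        return 0.0
    counts = Counter(s.lower() for s in citing_sections)
    total = sum(counts.values())
    return sum(SECTION_WEIGHTS.get(s, 0.0) * n for s, n in counts.items()) / total

# A preprint cited mostly in Methods scores higher than one cited
# only in Introductions.
print(section_citation_index(["methods", "methods", "results"]))  # ~0.92
print(section_citation_index(["introduction", "introduction"]))   # 0.25
```

Because the same function applies to preprints and published papers alike, their indices could be compared directly, which is exactly the comparison proposed above.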