Recently, I used AI to review a manuscript.
Editors and authors should not worry: I still read and reviewed the entire manuscript myself from beginning to end. I used this as an opportunity to test how far AI has progressed in evaluating scientific studies. I asked it three questions:
- Are there any logical inconsistencies or divergences among the results shown across the figures?
- Are the data in each panel of each figure inconsistent with results from previous mouse studies?
- Compared with previous mouse studies, what are the genuinely novel findings in this manuscript, and what new hypotheses emerge from that novelty?
My conclusion was that it's a promising approach. The AI caught many details, which suggests it already has value as a screening tool. To some extent, it could even identify the most important issue in the manuscript. For now, however, AI cannot replace human reviewers. Human review is not just about finding inconsistencies; it is about judging importance, weighing evidence, and understanding what truly advances a field. That requires prioritization and scientific intuition, not just pattern recognition.
Still, I have no doubt that AI will improve rapidly. Very soon, it may be able to answer my three questions with far greater precision than it can today. When that happens, AI will begin to do more than assist peer review: it will start to score the quality of a study in a systematic way. And when quality can be scored consistently, concepts such as “novelty” and “impact” may be revealed for what they often are—partly evidence-based judgments, but also partly personal opinions shaped by taste, prestige, and fashion.
That is why I think preprints scored by AI and circulated on social media may represent the future of scientific publication: immediate dissemination, visible evaluation, and potentially far lower cost to authors.
The more important question, then, is this: how would such a tool change scientific publishing? Here are my predictions.
1. AI will move from assistance to evaluation.
AI will not only review manuscripts and research proposals, but also generate structured scores for reproducibility, relevance, significance, and perhaps methodological rigor. Commercial versions will likely offer different levels of analysis depending on cost, from basic screening to full literature comparison and hypothesis generation.
2. Publishers, funders, and researchers will all adopt it for efficiency.
Funding agencies and scientific publishers will use AI tools to accelerate the review process and handle more submissions with fewer delays. Individual researchers will also subscribe to basic AI functions to improve manuscripts before submission and to identify weaknesses earlier in the writing process.
3. A new type of journal will emerge.
Eventually, journals may stop defining themselves mainly as gatekeepers. Instead, every submitted manuscript could receive a full AI-based review within minutes at low cost. The journal’s main job would then be different: not deciding whether a paper deserves to exist, but helping readers discover which papers are most worth reading based on significance, public interest, or personal relevance. Readers could also comment directly on studies, creating a more open and continuous evaluation system. In that sense, scientific publishing may begin to resemble YouTube or TikTok—not in superficiality, but in recommendation-driven discovery.
4. Institutions will normalize AI-reviewed preprints.
Research institutes and government agencies will subscribe to these tools and provide them to their members. Researchers will use them to review and score their own manuscripts before posting them to preprint servers, labeling them as “reviewed” or “evaluated.” If that system becomes standardized and trusted, more preprints may be indexed and taken seriously as part of the formal scientific record. Fewer studies will be trapped in endless review cycles, and less knowledge will be lost simply because it failed to clear the prestige threshold of a particular journal.
If this prediction turns out to be correct, then the pay-to-publish model is living on borrowed time. Once evaluation, dissemination, and discovery can be separated from traditional journal gatekeeping, the current system will be much harder to justify. Scientific publishing will not disappear, but its value proposition will have to change—from controlling access to knowledge toward organizing, evaluating, and recommending it.