The contemporary scientific publishing business is driven by the impact factor (IF). By definition, a journal's IF in a given year = (citations received that year to items published in the previous two years) / (number of items published in those two years), i.e., the attention received per paper the journal published. In other words, it is a measure of popularity, of memes. Research institutes and funding agencies use the IF of the journals in which a researcher publishes to evaluate her/his performance. Everyone who follows social media understands that the most efficient way to get attention is to copy an existing meme, not to create a new idea. Ironically, this is exactly what is happening in academia: popularity matters more than innovation. In fact, it is an open secret that an editor's main job is to raise the journal's IF. A high IF attracts more high-meme manuscripts, forming a self-reinforcing cycle.
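For concreteness, here is a minimal sketch of the IF arithmetic as defined above; the journal numbers are hypothetical, invented purely for illustration.

    def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
        # IF = citations received this year to items published in the
        # previous two years, divided by the number of those items.
        return citations_this_year / items_prev_two_years

    # Hypothetical journal: 12,000 citations in 2024 to 3,000 items
    # published in 2022-2023 gives IF = 4.0.
    print(impact_factor(12_000, 3_000))  # 4.0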
Is it possible to break this self-reinforcing cycle and bring innovation back to academic research? I believe so. A meme stops spreading when viewers grow tired of the repetitive information and start searching for a new one. Curiosity is what breaks a meme. And how do we arouse curiosity? Comparison.
Imagine this new style of scientific journal/search engine: you ask a question, and the "journal" returns dozens of papers, including new ones and papers published in other journals, each equipped with a relevance score (which one answers your question best), rigor and reproducibility scores (given by AI review), a technical novelty score (evaluating the difference from previous studies), and a clarity score (also reviewed by AI). When you find that some preprints score higher than papers published in high-impact journals, won't you become curious enough to check them? Importantly, these scores do not evaluate creativity or conceptual novelty; readers can judge those for themselves. I'd like to call this "winning back creativity for science".
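To make the proposal concrete, here is a minimal sketch of what one returned record might look like; every field name and score below is hypothetical, not the actual API of any existing platform.

    from dataclasses import dataclass

    @dataclass
    class ScoredPaper:
        title: str
        venue: str                 # journal or preprint server
        relevance: float           # how well it answers the query
        rigor: float               # AI-reviewed methodological rigor
        reproducibility: float     # AI-reviewed reproducibility
        technical_novelty: float   # difference from previous studies
        clarity: float             # AI-reviewed writing clarity
        # Deliberately no creativity score: readers judge that themselves.

    # Hypothetical results: a preprint can outrank a high-IF paper.
    results = [
        ScoredPaper("Preprint A", "bioRxiv", 0.91, 0.85, 0.80, 0.74, 0.88),
        ScoredPaper("Paper B", "High-IF Journal", 0.87, 0.79, 0.61, 0.42, 0.90),
    ]
    for p in sorted(results, key=lambda r: r.relevance, reverse=True):
        print(f"{p.title} ({p.venue}): relevance={p.relevance:.2f}")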
The platform closest to this idea, as far as I can see, is the peerAI app (though it realizes the idea only partially). Please give it a try.
https://peerai.app/