Conducting Research Fast and Slow: The Importance of Valuing Scientific Process over Scientific Products

Jonathan M. Fawcett & Heath E. Matheson

As graduate students in the mid-2000s, we began our academic careers during a tumultuous period of technological growth, when electronic communication was just achieving mainstream appeal. In particular, the newfound ability to connect instantaneously with scientists anywhere in the world promised exciting opportunities to accelerate the production and communication of scientific findings. However, amidst the achievements of the digital age, we have also detected deeper alterations in our course, driven by an ever-changing scientific culture. One such change is our current obsession with “impact” as the core metric against which scientific advancement is gauged. Although not new, this obsession has, we believe, been stoked in recent years by the heightened visibility of scientific findings (e.g., on social media), granting policy makers and the general public (whose opinions shape funding decisions) front-row seats in the scientific forum. Yet whereas these audiences are well poised to gauge the apparent implications of a given project, those implications (which, together with the prestige of the venue in which the work is published, form the basis of what we refer to as impact) are often orthogonal to the quality of the work behind the claims, including the methodological basis on which they were made. The net result is that the success and perceived utility of a given research program, in the eyes of the public, is of the utmost importance to that program's survival.

As a consequence, academia has shifted to emphasize the production of high-impact projects of great social interest. To many, this shift seems like a good idea, emphasizing “big” discoveries capable of redefining entire fields and improving our quality of life. However, scientists have increasingly realized that yoking the success – and even survival – of those conducting such research to the production of a never-ending stream of ground-breaking discoveries is unsustainable given the unpredictable nature of scientific inquiry. Worse yet, it contributes to a dangerous culture of competition favoring poor methodological practice in the pursuit of career advancement and/or fame. We are already experiencing the fallout of these incentives in the form of data fraud, non-replicable findings, and questionable research practices (QRPs).1 Though harmful, these behaviors pay off: studies have repeatedly linked QRPs to positive career outcomes, and computer simulations modelling the relation between rigour and scientific output have found that simulated laboratories with poor standards tend to be advantaged by their rapid output of high-impact results.2 Indeed, certain QRPs (e.g., p-hacking) are so common that they have been called the “steroids of scientific competition”.3

In our view, the ensuing pressures have encouraged research that is inherently performative rather than demonstrative, intended to communicate sophistication and invoke community interest rather than to uncover verifiable knowledge. Ultimately, this approach may be considered scientific theatre in the same sense that most airport security measures are considered security theatre:4 Whereas the latter are widely viewed by security experts as displays intended to emulate security rather than ensure actual security, the former may be viewed as displays intended to emulate progress rather than ensure actual progress. To be clear, we are not saying that public engagement is inherently negative; rather, that engagement and/or impact without a solid methodological foundation is dangerous, undermining our credibility as scientists. Scientific theatre refers to the pursuit of either for its own sake or for an incentive (e.g., fame, funding) orthogonal to the integrity of the finding itself.

Given challenges to mental health and the competitive pressures of the scientific marketplace, early career researchers (ECRs) may be persuaded to engage in scientific theatre to ensure their survival.5 It can seem the only way to guarantee a stream of impactful publications. However, granting agencies, the public, and scientists themselves must balance success with failure. Because research is a stochastic process involving a great deal of luck,6 scientists have limited control over the outcome of their work, making a brief sequence of “successful” research articles little more informative than counting the heads in a small series of coin tosses; many well-conceived projects will fail to produce attractive results.7 This is especially salient for ECRs, who have made fewer “tosses”. Whilst non-significant outcomes are increasingly accepted, it remains less impressive in the current culture to tweet about confusing or modest findings.

Many have grappled with these or similar issues, advocating (as do we) that rigour must be considered alongside impact.8 We also agree that society must abandon its inherent fame heuristic, which substitutes the prestige of the institution or journal whence a finding comes for a thorough evaluation of its method. However, this is not a trivial goal. Specifically, ECRs find themselves caught in a variation of the Prisoner's Dilemma, which we call the Scientist's Dilemma: Our science would benefit from a slower approach focused on rigorous methods, high-quality training, and cumulative advancement, but adopting that approach in the present climate without broad support is personally dangerous, and those working at a slower, more rigorous pace are at a disadvantage compared to others opting to pursue scientific theatre. The best approach to “winning” the game of science remains to produce influential papers, regardless of their quality or reproducibility. The optimal solution instead requires everyone to adopt more rigorous standards and assume others are doing the same, despite the personal advantage of doing otherwise.

As young researchers, we find this situation daunting, and the solution remains unclear, but it is likely to necessitate change on at least two levels. First, the harmful incentive structure within science, created by a culture of what might be called “edutainment,” needs to be addressed at the institutional level, where university administration and granting agencies must recognize the stochastic nature of scientific inquiry and be patient as innovation accumulates. Second, scientists must evaluate their attitudes and the scientific culture they transmit to their students. Indeed, the motivations that compel individuals to a life of science differ, and whilst some people value honesty, transparency, and rigour, others explicitly relish the cut-throat competitive nature of being the first to a discovery (as has been confided to us when discussing this work at conferences). Without a broad assessment of the value systems of scientists, and without addressing the question of the purpose of research, the Scientist's Dilemma will likely remain. Re-orienting the scientific machine means agreeing that research has a purpose separate from its output, and that there are greater benefits in valuing the process itself than in treating its products as a mechanism to promote personal fame.


  1. Pashler, H., & Wagenmakers, E. J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.
  2. Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3, 160384.
  3. John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532.
  4. DeVault, A., Miller, M. K., & Griffin, T. (2016). Crime control theater: Past, present, and future. Psychology, Public Policy, and Law, 22(4), 341-348.
  5. Fang, F. C., & Casadevall, A. (2015). Competitive science: Is competition ruining science? Infection and Immunity, 83, 1229-1233.
  6. Sinatra, R., Wang, D., Deville, P., Song, C., & Barabási, A.-L. (2016). Quantifying the evolution of individual scientific impact. Science, 354, 596-603.
  7. Gelman, A. (2017). Ethics and statistics: Honesty and transparency are not enough. Chance, 30(1), 37-39.
  8. Ioannidis, J. P., Fanelli, D., Dunne, D. D., & Goodman, S. N. (2015). Meta-research: Evaluation and improvement of research methods and practices. PLoS Biology, 13(10), e1002264.
