Investigating the replicability of preclinical cancer biology


Submitted by samuelpawel on May 13, 2022, 11:52 a.m.

Timothy M Errington, Maya Mathur, Courtney K Soderberg, Alexandria Denis, Nicole Perfito, Elizabeth Iorns, Brian A Nosek
Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., and Nosek, B. A. (2021). Investigating the replicability of preclinical cancer biology. eLife. https://doi.org/10.7554/elife.71601
DOI: 10.7554/eLife.71601

Brief Description
Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary – the replication was either a success or a failure – and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.

The analyses in the paper were conducted in the statistical programming language R.
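
As a rough illustration of the effect-size comparison described above, the sketch below uses made-up numbers to show how paired original and replication effect sizes might be compared, together with one simple binary criterion (whether the replication estimate falls inside the original 95% confidence interval). This is a minimal sketch only, not the project's actual analysis code; the paper combines several such criteria, and the real code and data are linked under Resources.

    # Illustrative sketch only -- not the project's actual analysis code.
    # The effect sizes below are hypothetical numbers standing in for paired
    # original/replication estimates on a common scale.
    effects <- data.frame(
      es_orig = c(1.20, 0.80, 0.45),  # hypothetical original effect sizes
      se_orig = c(0.30, 0.25, 0.20),  # hypothetical original standard errors
      es_rep  = c(0.35, 0.75, 0.05),  # hypothetical replication effect sizes
      se_rep  = c(0.25, 0.20, 0.15)   # hypothetical replication standard errors
    )

    # Effect-size comparison: how much smaller is the median replication
    # effect size than the median original effect size, and what proportion
    # of replication effects are smaller than their originals?
    shrinkage    <- 1 - median(effects$es_rep) / median(effects$es_orig)
    prop_smaller <- mean(effects$es_rep < effects$es_orig)

    # One simple binary criterion: is the replication estimate inside the
    # original 95% confidence interval? (The paper uses several criteria.)
    inside_original_ci <- effects$es_rep >= effects$es_orig - 1.96 * effects$se_orig &
      effects$es_rep <= effects$es_orig + 1.96 * effects$se_orig

    # Sanity check of the combined success rate quoted in the description:
    # 39/97 positive and 12/15 null effects succeeded on 3+ of 5 criteria.
    (39 + 12) / (97 + 15)  # ~0.455, reported as 46% (51/112)
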
Why should we reproduce your paper?
This paper represents an important milestone in meta-science, as it is one of the first large-scale replication projects outside the social sciences.
What should reviewers focus on?
Reviewers can start by reproducing the figures and tables from the paper. If time permits, they can also try to reproduce the analyses at the study level. Data and materials are available at both the project and study levels; a short getting-started sketch is given under Resources below.

Resources

  Code URL: https://osf.io/squy7/
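
A minimal way to pull the project-level materials into R is sketched below, assuming the osfr package; this is a suggestion rather than part of the original submission, and downloading manually from the Code URL works just as well. The GUID "squy7" is taken from the Code URL above.

    # Getting-started sketch, assuming the 'osfr' package is installed
    # (install.packages("osfr")).
    library(osfr)

    project <- osf_retrieve_node("squy7")  # GUID from the Code URL above
    osf_ls_files(project)                  # files attached at the project level
    osf_ls_nodes(project)                  # study-level components

    # Download the project-level files into a local folder before re-running
    # the analyses.
    dir.create("rpcb-osf", showWarnings = FALSE)
    osf_download(osf_ls_files(project), path = "rpcb-osf", recurse = TRUE)

Study-level components listed by osf_ls_nodes() can then be retrieved individually, so reviewers need only download the studies they want to reproduce.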

Associated event