Papers




Browse ReproHack papers

  • pyKNEEr: An image analysis workflow for open and reproducible research on femoral knee cartilage

    Authors: S. Bonaretti, G. Gold, G. Beaupre
    DOI: https://doi.org/10.1371/journal.pone.0226501
    Submitted by sbonaretti      

    Why should we attempt to reproduce this paper?

    The paper describes pyKNEEr, a python package for open and reproducible research on femoral knee cartilage using Jupyter notebooks as a user interface. I created this paper with the specific intent to make both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field.

    Tags: Python R
  • Tree regeneration in models of forest dynamics: A key priority for further research

    Authors: Olalla Díaz‐Yáñez; Yannek Käber; Tim Anders; Friedrich Bohn; Kristin H. Braziunas; Josef Brůna; Rico Fischer; Samuel M. Fischer; Jessica Hetzer; Thomas Hickler et al.
    DOI: 10.1002/ecs2.4807
    Submitted by odiazyanez    
    Number of reviews:   1
    Why should we attempt to reproduce this paper?

    This paper is fully reproducible: we provide the protocol that the different modelers used, the data produced by these models, the observed data, and the code to run the analysis behind the paper's results, figures, and text. I have not come across any other paper in forestry that is as fully reproducible as ours, so it might also be a rare example in this field and hopefully a motivation for others to do the same. Please note that we do not provide the models that were used to run the simulations, since their outputs serve as our data collection, but we do provide the data resulting from those simulations.

  • The Interplay of Time-of-day and Chronotype Results in No General and Robust Cognitive Boost

    Authors: Alodie Rey-Mermet, Nicolas Rothen
    DOI: https://doi.org/10.1525/collabra.88337
    Submitted by areyme      

    Why should we attempt to reproduce this paper?

    In this paper, an R package was used to improve the reproducibility of the analyses. Therefore, it would be good to know to what extent this works. The R package includes the following analyses: (1) data trimming and preparation, (2) descriptive statistics, (3) reliability and correlations, (4) t-tests and Bayesian t-tests, (5) latent-change models (structural equation modeling approach), and (6) multiverse analyses. Furthermore, all deidentified data, experiment codes, research materials, and results are publicly accessible on the Open Science Framework (OSF) at https://osf.io/ngfxv. The study's design and the analyses were pre-registered on OSF. The preregistration can be accessed at https://osf.io/tywu7.

  • Revisiting the zonally asymmetric extratropical circulation of the Southern Hemisphere spring using complex empirical orthogonal functions

    Authors: Elio Campitelli, Leandro Díaz, Carolina Vera
    DOI: 10.1007/s00382-023-06780-0
    Submitted by eliocamp      
      Mean reproducibility score:   1.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    I used a lot of different tools and strategies to make this paper easily reproducible at different levels. There's a Docker container for the highest level of reproducibility, and package versions are managed with renv. The data used in the paper is hosted on Zenodo to avoid long queue times when downloading from the Climate Data Store and to future-proof the analysis in case that service goes away, and the data is checksummed before use.

    Tags: R Docker climate
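
    The entry above mentions hosting the data on Zenodo, verifying it with checksums, and pinning package versions with renv. The R sketch below is a hypothetical illustration of that general pattern; the Zenodo URL, file name, and md5 hash are placeholders, not the paper's actual code:

      # Restore the package versions recorded in the project's renv lockfile
      renv::restore()

      # Download a data file from Zenodo (placeholder URL) and verify its
      # md5 checksum before using it in the analysis
      url  <- "https://zenodo.org/record/0000000/files/example_data.nc"  # hypothetical
      dest <- file.path("data", "example_data.nc")
      download.file(url, dest, mode = "wb")

      expected_md5 <- "00000000000000000000000000000000"  # placeholder hash
      stopifnot(unname(tools::md5sum(dest)) == expected_md5)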
  • The Polar Transition from Alpha to Beta Regions Set by a Surface Buoyancy Flux Inversion

    Authors: Romain Caneill, Fabien Roquet, Gurvan Madec, Jonas Nycander
    DOI: 10.1175/JPO-D-21-0295.1
    Submitted by rcaneill      
      Mean reproducibility score:   0.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    I tried hard to make it reproducible, so hopefully this paper can serve as an example of how reproducibility can be achieved. I think that being reproducible with only a few commands typed in a terminal is quite an achievement, at least in my field, where I usually see code published along with the paper but with almost no documentation on how to rerun it.

  • A multi-level analysis of data quality for formal software citation

    Authors: David Schindler, Tazin Hossain, Sascha Spors, Frank Krüger
    DOI: https://doi.org/10.48550/arXiv.2306.17535
    Submitted by frank.krueger    
      Mean reproducibility score:   9.0/10   |   Number of reviews:   2
    Why should we attempt to reproduce this paper?

    We spent a lot of time making our analyses reproducible. A review would allow us to collect some information on whether we were successful.

  • What do analyses of city size distributions have in common?

    Authors: Clémentine Cottineau
    DOI: 10.1007/s11192-021-04256-8
    Submitted by clementinecottineau      
      Mean reproducibility score:   8.5/10   |   Number of reviews:   2
    Why should we attempt to reproduce this paper?

    This article was meant to be entirely reproducible, with the data and code published alongside it. It is, however, not embedded within a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.

  • Living HTA: Automating Health Technology Assessment with R

    Authors: Robert A. Smith, Paul P. Schneider, Wael Mohammed
    DOI: 10.12688/wellcomeopenres.17933.1
    Submitted by rasmith3    

    Why should we attempt to reproduce this paper?

    We think this is an interesting paper for anyone who wants to learn to build an API with the R package plumber. This is a novel method in health economics, but we believe it will help improve the transparency of modelling methods in our field.
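
    As a rough illustration of what "an API built with plumber" looks like (a minimal, hypothetical sketch, not the authors' actual health-economic model), a plumber file defines endpoints with roxygen-style comments and is served with plumber::plumb():

      # plumber.R -- illustrative endpoint only; the parameters and the
      # calculation are placeholders, not the paper's model
      #* Return a toy incremental cost-effectiveness ratio
      #* @param delta_cost incremental cost
      #* @param delta_qaly incremental QALYs
      #* @get /icer
      function(delta_cost = 1000, delta_qaly = 0.1) {
        as.numeric(delta_cost) / as.numeric(delta_qaly)
      }

      # Serve the API locally:
      #   plumber::plumb("plumber.R")$run(port = 8000)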

  • Does ethnic density influence community participation in mass participation physical activity events?

    Authors: Robert A. Smith, Paul P. Schneider, Alice Bullas, Steve Haake, Helen Quirk, Rami Cosulich, Elizabeth Goyder
    DOI: 10.12688/wellcomeopenres.15657.2
    Submitted by rasmith3    
      Mean reproducibility score:   9.2/10   |   Number of reviews:   5
    Why should we attempt to reproduce this paper?

    The code and data are both on GitHub. The paper has been published in Wellcome Open Research and has been replicated by multiple other authors.

  • Optimizing the Use of Carbonate Standards to Minimize Uncertainties in Clumped Isotope Data

    Authors: Ilja J. Kocken, Inigo A. Müller, Martin Ziegler
    DOI: 10.1029/2019GC008545
    Submitted by japhir      

    Why should we attempt to reproduce this paper?

    Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope that the problem is general enough that the insights can translate to any kind of measurement that relies on machine calibration. I've committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but I have not had any internal code review and I also didn't receive any feedback on it from the reviewers. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!

  • No Effect of Nature Representations on State Anxiety, Actual and Perceived Noise

    Authors: Korbmacher, M., & Wright, L.
    DOI: 10.31234/osf.io/8gtyq
    Submitted by hub-admin    
      Mean reproducibility score:   5.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    Basic analyses that are easy to understand and reproduce; the paper also contains multiple imputation, which can be interesting. All materials are available.

    Tags: R
  • Genomic Response to Vitamin D Supplementation in the Setting of a Randomized, Placebo-Controlled Trial

    Authors: Berlanga-Taylor, A. J., Plant, K., Dahl, A., Lau, E., Hill, M., Sims, D., Heger, A., et al.
    Submitted by hub-admin  
    Number of reviews:   1
    Why should we attempt to reproduce this paper?

    It was a null-findings paper that disappointed many people. Could I have made a mistake in the coding? I'm interested in using it as an example of reproducible research and learning from ReproHack. It's nerve-wracking to submit my work for inspection by others, so I also want to overcome that fear and be able to lead my students by example. I'll be available via the Slack group or other forms of communication as suggested by the organisers. Please note that only the gene expression and related data are available on ArrayExpress.

    Tags: Python R
  • Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study

    Authors: Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., deMayo, B., Long, B., Yoon, E. J., & Frank, M. C.
    DOI: 10.1098/rsos.201494
    Submitted by hub-admin    
      Mean reproducibility score:   9.7/10   |   Number of reviews:   3
    Why should we attempt to reproduce this paper?

    This is perhaps an interesting 'meta' example for ReproHack, as in this study we attempted to reproduce analyses reported in 25 published articles. So it seems even more important that our own analyses are reproducible! We tried our best to adhere to best practices in this regard, so we would be very keen to know if anyone has problems reproducing our analyses and/or to learn how we can make the process easier. A couple of things to note: 1. In addition to the links to the data and analysis scripts provided above, we also have a Code Ocean container for this article (https://doi.org/10.24433/CO.1796004.v3), which should theoretically allow you to reproduce the analyses with the click of a single button (we hope!). 2. In addition to the main research analyses (for which I've provided links above), we also have data, scripts, and Code Ocean containers for each of the reproducibility attempts for the 25 articles we looked at. I don't know if you will also want to look at this level of the analyses, but if you do, take a look at Supplementary Information section E here: https://royalsocietypublishing.org/doi/suppl/10.1098/rsos.201494. For each reproducibility attempt, there is a short 'vignette' describing the outcome, along with a link to data/scripts on the OSF and a Code Ocean container.

    Tags: R
  • The role of conidia in the dispersal of Ascochyta rabiei

    Authors: Khaliq, I., Fanning, J., Melloy, P. et al.
    DOI: 10.1007/s10658-020-02126-2
    Submitted by hub-admin    

    Why should we attempt to reproduce this paper?

    I suggested a few papers last year. I'm hoping that we've improved our reproducibility with this one this year. We've done our best to package it up both in Docker and as an R package. I'd be curious to know which turns out to be the best way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?

    Tags: R Docker
  • Unveiling the diversity of spatial data infrastructures in Latin America: evidence from an exploratory inquiry

    Authors: Luis M. Vilches-Blázquez & Daniela Ballari
    DOI: 10.1080/15230406.2020.1772113
    Submitted by hub-admin    
      Mean reproducibility score:   10.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    It is a fairly easy piece of code to reproduce. It reads the data, runs a few descriptive statistical analyses, and plots figures using ggplot2.

    Tags: R
  • Evolutionary and food supply implications of ongoing maize domestication by Mexican campesinos

    Authors: Bellon, M. R., Mastretta-Yanes, A., Ponce-Mendoza, A., Ortiz-Santamaría, D., Oliveros-Galindo, O., Perales, H., … Sarukhán, J.
    DOI: 10.1098/rspb.2018.1049
    Submitted by hub-admin    
      Mean reproducibility score:   6.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    Cleaning the databases used for this study was one of its most challenging aspects, so making them public is the best way to get the most out of them. We made an effort to document all analyses and data-wrangling steps. We are interested to know whether the work is truly reproducible, so that we can follow the same scheme for further projects or adjust accordingly.

    Tags: R
  • pyKNEEr: An image analysis workflow for open and reproducible research on femoral knee cartilage

    Authors: Bonaretti S, Gold GE, Beaupre GS
    DOI: 10.1371/journal.pone.0226501
    Submitted by hub-admin    
      Mean reproducibility score:   6.5/10   |   Number of reviews:   2
    Why should we attempt to reproduce this paper?

    The paper describes pyKNEEr, a python package for open and reproducible research on femoral knee cartilage using Jupyter notebooks as a user interface. I created this paper with the specific intent to make both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field. Therefore, two things in the paper can be reproduced: 1) workflow results: Table 2 contains links to all the Jupyter notebooks used to calculate the results. Computations are long and might require a server, so if you want to run them locally, I recommend using only 2 or 3 images as inputs. Also, the paper should be sufficient, but if you need further introductory information, there is a documentation website: https://sbonaretti.github.io/pyKNEEr/ and a "how to" video: https://youtu.be/7WPf5KFtYi8 2) paper graphs: In the captions of figures 1, 4, and 5 you can find links to the data repository, the code (a Jupyter notebook), and the computational environment (Binder) needed to fully reproduce each graph. These computations can easily be run locally and take only a few seconds. All Jupyter notebooks automatically download data from Zenodo and provide dependencies, which should make reproduction easier.

  • A novel approach to modelling transcriptional heterogeneity identifies the oncogene candidate CBX2 in invasive breast carcinoma

    Authors: Piqué, D.G., Montagna, C., Greally, J.M. et al.
    Submitted by hub-admin    
      Mean reproducibility score:   4.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    This paper provides a novel approach to identifying oncogenes based on RNA overexpression in subsets of tumor relative to adjacent normal tissue. Showing that this study can be reproduced would aid other researchers who are attempting to identify oncogenes in other cancer types using the same methodology.

    Tags: R
  • Good Me Bad Me: Prioritization of the Good-Self During Perceptual Decision-Making

    Authors: Hu, C.-P., Lan, Y., Macrae, C. N., & Sui, J.
    DOI: 10.1525/collabra.301
    Submitted by hub-admin    
      Mean reproducibility score:   7.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    It would be a great help to have the scientific record I've published independently checked, so that errors, if there are any, can be corrected. Also, I will learn how to share the data in a way that is more accessible to others if you can give me feedback.

    Tags: Python R Matlab
  • Mental Health and Social Contact During the COVID-19 Pandemic: An Ecological Momentary Assessment Study

    Authors: Eiko I. Fried, Faidra Papanikolaou, Sacha Epskamp
    DOI: 10.31234/osf.io/36xkp
    Submitted by hub-admin    
      Mean reproducibility score:   7.6/10   |   Number of reviews:   5
    Why should we attempt to reproduce this paper?

    A currently submitted paper on COVID-19 and mental health. Unique clinical data (a time series spanning the pandemic onset) and methods; hopefully fun to work on. Possibly too boring/easy to reproduce given my data and code? Not sure.

    Tags: R
