This paper is fully reproducible: we provide the protocol the different modelers used, the data produced by these models, the observed data, and the code to run the analysis behind the paper's results, figures, and text. I have not come across any other paper in forestry that is as fully reproducible as ours, so it may also be a rare example in this field and hopefully a motivation for others to do the same. Please note that we do not provide the models used to run the simulations, as the simulations themselves serve as our data collection step, but we do provide the data resulting from them.
In this paper, an R package was used to improve the reproducibility of the analyses, so it would be good to know to what extent this works in practice. The R package includes the following analyses: (1) data trimming and preparation, (2) descriptive statistics, (3) reliability and correlations, (4) t-tests and Bayesian t-tests, (5) latent-change models (a structural equation modeling approach), and (6) multiverse analyses. Furthermore, all deidentified data, experiment code, research materials, and results are publicly accessible on the Open Science Framework (OSF) at https://osf.io/ngfxv. The study's design and analyses were preregistered on OSF; the preregistration can be accessed at https://osf.io/tywu7.
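For orientation, here is a minimal sketch of the kind of analysis listed in step (4), run on simulated data; it is not the package's own code, and the variable names are purely illustrative.

```r
# Minimal sketch of a classical and a Bayesian paired t-test on simulated data.
# Not the package's own code; variable names are illustrative only.
set.seed(1)
pre  <- rnorm(50, mean = 100, sd = 15)    # simulated pre-test scores
post <- rnorm(50, mean = 103, sd = 15)    # simulated post-test scores

t.test(post, pre, paired = TRUE)          # classical paired t-test

library(BayesFactor)
ttestBF(x = post, y = pre, paired = TRUE) # Bayesian paired t-test (Bayes factor)
```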
I used a lot of different tools and strategies to make this paper easily reproducible at different levels. There is a Docker container for the highest level of reproducibility, and package versions are managed with renv. The data used in the paper are hosted on Zenodo, both to avoid long queue times when downloading from the Climate Data Store and to future-proof against it going away, and they are checksummed before use.
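As an illustration of that download-and-verify step, here is a minimal sketch in R; the Zenodo URL, file name, and checksum are hypothetical placeholders, not the actual record.

```r
# Sketch of restoring the pinned package library and verifying the data checksum.
# URL, file name, and checksum below are hypothetical placeholders.
renv::restore()  # reinstall the package versions pinned in renv.lock

data_url     <- "https://zenodo.org/record/XXXXXXX/files/climate_subset.nc"  # hypothetical
data_file    <- "climate_subset.nc"
expected_md5 <- "d41d8cd98f00b204e9800998ecf8427e"                            # hypothetical

if (!file.exists(data_file)) {
  download.file(data_url, destfile = data_file, mode = "wb")
}

# Refuse to proceed if the file does not match the recorded checksum
stopifnot(unname(tools::md5sum(data_file)) == expected_md5)
```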
We spent a lot of time making our analyses reproducible. A review would give us some information on whether we have succeeded.
This article was meant to be entirely reproducible, with the data and code published alongside it. It is, however, not embedded within a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
We think this is an interesting paper for anyone who wants to learn to build an API with the R package plumber. This is a novel method in health economics, but we believe it will help improve the transparency of modelling methods in our field.
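For readers unfamiliar with plumber, here is a minimal sketch of how an endpoint is defined and served; the endpoint, parameter, and decision rule are invented for illustration and are not the authors' actual API.

```r
# plumber.R -- minimal illustrative endpoint; not the authors' actual API.
library(plumber)

#* Toy cost-effectiveness decision for a given willingness-to-pay threshold
#* @param wtp Willingness-to-pay threshold (numeric)
#* @get /decision
function(wtp = 20000) {
  wtp <- as.numeric(wtp)
  list(
    threshold = wtp,
    decision  = if (wtp >= 20000) "cost-effective" else "not cost-effective"
  )
}
```

Serving it locally is then a one-liner, `plumber::plumb("plumber.R")$run(port = 8000)`, after which the endpoint answers GET requests at /decision.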
The code and data are both on GitHub. The paper has been published in Wellcome Open Research and has been replicated by multiple other authors.
This paper proposes a probabilistic planner that can solve goal-conditioned tasks such as complex continuous control problems. The approach reaches state-of-the-art performance when compared to current deep reinforcement learning algorithms. However, the method relies on an ensemble of deep generative models and is computationally intensive. It would be interesting to reproduce the results presented in this paper on their robotic manipulation and navigation problems, as these are very challenging problems that current reinforcement learning methods cannot easily solve (and when they do, they require a significantly larger number of experiences). Can the results be reproduced out of the box with the provided code?
Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope that the problem is general enough that the insights translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to walk through the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but I have not had any internal code review, and I also didn't receive any feedback on it from the reviewers. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!
Basic analyses that are easy to understand and reproduce; the paper also includes multiple imputation, which may be of interest. ALL materials are available.
It was a null-findings paper that disappointed many people. Could I have made a mistake in the coding? I'm interested in using it as an example of reproducible research and learning from ReproHack. It's nerve-wracking to submit your work for inspection by others, so I also want to overcome that fear and be able to lead my students by example. I'll be available via the Slack group or other forms of communication as suggested by the organisers. Please note that only the gene expression and related data are available on ArrayExpress.
This is perhaps an interesting 'meta' example for ReproHack, as in this study we attempted to reproduce analyses reported in 25 published articles. So it seems even more important that our own analyses are reproducible! We tried our best to adhere to best practices in this regard, so we would be very keen to know if anyone has problems reproducing our analyses and/or to learn how we can make the process easier. A couple of things to note: 1. In addition to the links to the data and analysis scripts provided above, we also have a Code Ocean container for this article (https://doi.org/10.24433/CO.1796004.v3), which should theoretically allow you to reproduce the analyses with the click of a single button (we hope!). 2. In addition to the main research analyses (for which I've provided links above), we also have data, scripts, and Code Ocean containers for each of the reproducibility attempts for the 25 articles we looked at. I don't know if you will also want to look at this level of the analyses, but if you do, take a look at Supplementary Information section E (https://royalsocietypublishing.org/doi/suppl/10.1098/rsos.201494). For each reproducibility attempt, there is a short 'vignette' describing the outcome, along with a link to data/scripts on the OSF and a Code Ocean container.
I suggested a few papers last year, and I'm hoping we've improved our reproducibility with this one this year. We've done our best to package it up both in Docker and as an R package. I'd be curious to know which turns out to be the best way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?
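To make the two routes concrete, here is a rough sketch of what each might look like; the package, repository, and image names are placeholders, not our actual identifiers.

```r
# Route 1: install the R package with its vignettes and read them.
# Package/repository names below are placeholders.
remotes::install_github("ourlab/paperpkg", build_vignettes = TRUE)
browseVignettes("paperpkg")

# Route 2: run the Docker image (shell command shown as a comment):
#   docker run --rm -it ourlab/paperpkg:latest
```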
It is a fairly easy piece of code to reproduce. It reads the data, runs a few descriptive statistical analyses, and plots figures using ggplot2.
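As a rough illustration of that kind of workflow, here is a minimal R sketch; the file and column names are placeholders rather than the actual variables.

```r
# Minimal sketch: read data, summarise, plot with ggplot2.
# File and column names are placeholders.
library(ggplot2)

dat <- read.csv("data.csv")               # hypothetical input file

summary(dat)                              # descriptive statistics

ggplot(dat, aes(x = group, y = value)) +  # hypothetical columns
  geom_boxplot() +
  theme_minimal()
ggsave("figure_1.png", width = 6, height = 4)
```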
Cleaning the databases used for this study was one of the most challenging parts of it, so making them public is the best way to get the most out of the work. We made an effort to document all analyses and data wrangling steps. We are interested to know whether it is truly reproducible, so that we can follow the same scheme for future projects or adjust accordingly.
The paper describes pyKNEEr, a python package for open and reproducible research on femoral knee cartilage using Jupyter notebooks as a user interface. I created this paper with the specific intent to make both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field. Therefore, two things in the paper can be reproduced: 1) workflow results: Table 2 contains links to all the Jupyter notebooks used to calculate the results. Computations are long and might require a server, so if you want to run them locally, I recommend using only 2 or 3 images as inputs. The paper should be sufficient, but if you need further introductory information, there is a documentation website: https://sbonaretti.github.io/pyKNEEr/ and a "how to" video: https://youtu.be/7WPf5KFtYi8 2) paper graphs: In the captions of figures 1, 4, and 5 you can find links to the data repository, the code (a Jupyter notebook), and the computational environment (binder) needed to fully reproduce each graph. These computations can easily be run locally and take only a few seconds. All Jupyter notebooks automatically download data from Zenodo and provide dependencies, which should make reproducibility easier.
This paper provides a novel approach to identifying oncogenes based on RNA overexpression in subsets of tumors relative to adjacent normal tissue. Showing that this study can be reproduced would aid other researchers who are attempting to identify oncogenes in other cancer types using the same methodology.
It would be a great help to have the scientific record I've published checked independently, so that any errors can be corrected. I would also learn how to share my data in a way that is more accessible to others if you could give me feedback.
A currently submitted paper on the effects of COVID-19 on mental health. Unique clinical data (a time series covering the pandemic onset) and methods; hopefully fun to work on. Possibly too boring / easy to reproduce given my data and code? Not sure.