This paper shows a fun and interesting simulation result. I find it (of course) very important that our results are reproducible. In this paper, however, we did not include the exact code for these specific simulations, but the results should be reproducible using the code of our previous paper in PLOS Computational Biology (Van Oers, Rens et al., https://doi.org/10.1371/journal.pcbi.1003774). I am genuinely curious to see whether there is sufficient information for the Biophys J paper or whether we should have done better. Other people have already successfully built upon the 2014 (PLOS) paper using our code; see e.g. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.012408 and https://doi.org/10.1101/701037.
The format of the paper is a bit unusual: it is contained in, and compiled as, an R package. Although this would seem, on its face, to make it easier to reproduce, it is an open question how obvious the workflow will be to someone who has not worked with R packages before. I wonder to what extent people reproducing the results would prefer this to simple R scripts.
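For readers new to this setup, the usual way to work with a paper-as-package is to install it and then open its vignette; a minimal sketch is below, with hypothetical package and vignette names rather than the paper's actual ones.

    # Hypothetical names throughout; substitute the real repository and package.
    # install.packages("remotes")
    remotes::install_github("someauthor/paperpkg", build_vignettes = TRUE)
    browseVignettes("paperpkg")               # the compiled paper shows up as a vignette
    vignette("paper", package = "paperpkg")   # or open it directly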
We made a huge effort to ensure the paper is reproducible. But is it?
The original data took quite a while to produce for a previous paper, but for this paper, all tables and figures should be exactly reproducible by simply running the Jupyter notebook.
This is a small dataset with a lot of missing data, so it's quite challenging to produce reliable results. It uses multiple imputation to fill in the missing data, so it would be interesting to see whether the results hold up when this is redone. However, since the multiple imputation takes a couple of hours to run (on a decent laptop), the final multiply imputed data is also included. Additionally, multiply imputed data needs a different statistical analysis approach (the model is fitted to each imputed dataset and the estimates are then pooled), which is worth getting familiar with.
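For orientation, here is a minimal sketch of the usual multiple-imputation workflow in R, assuming the mice package; it uses mice's built-in example data, so the dataset, model, and variable names are illustrative and not taken from the paper.

    library(mice)

    # Uses mice's built-in `nhanes` example data; the paper's own variables differ.
    imp    <- mice(nhanes, m = 5, seed = 123)   # create 5 imputed datasets
    fits   <- with(imp, lm(chl ~ age + bmi))    # fit the analysis model in each one
    pooled <- pool(fits)                        # combine estimates (Rubin's rules)
    summary(pooled)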
We've tried to make it as easy as possible to reproduce. There's some fun physics in the paper and it's all done with Python!
Complex analyses over multiple variables. The paper is in press, so we can still fix errors ahead of publication!
I guess it could be a cool learning experience. The paper is written with knitr, sets a random seed, is part of the R package it describes, was written openly under version control (SVN, R-Forge), and is available in an open access journal (@up_jors).
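As a rough illustration of the seed-based reproducibility (assumed boilerplate, not the paper's actual setup), the idea is simply that an early chunk fixes the RNG state so every compile gives identical stochastic results:

    # Assumed setup-chunk content; the seed value is hypothetical.
    set.seed(42)       # fixes the RNG state for the whole document
    x <- rnorm(1000)   # downstream simulations/analyses now reproduce exactly
    sessionInfo()      # recording package versions also helps later reproduction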
The focus of the project is reproducibility. Here we show how data access differs from similar initiatives: https://ropensci.org/blog/2019/05/09/tradestatistics/. Also, similar projects have obscure parts, while ours exposes the code all the way from raw data download to dashboard creation.
It uses the drake R package, which should make reproducibility of R projects much easier (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.
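For those new to drake, the gist is that make.R declares a plan of targets and make() rebuilds only what is out of date; a minimal sketch (with hypothetical targets, not this project's actual ones) looks like:

    library(drake)

    # Hypothetical plan; the real make.R defines the project's own targets,
    # typically reading input files via file_in().
    plan <- drake_plan(
      dat    = mtcars,
      model  = lm(mpg ~ wt + hp, data = dat),
      report = summary(model)
    )

    make(plan)      # builds the pipeline; re-running skips up-to-date targets
    readd(report)   # load a built target from drake's cache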
This preprint is an attempt to reproduce Google Flu Trends in the Netherlands. The whole paper + code is meant to be easily reproducible and transferable to other countries and/or areas. If you are familiar with time series data, lasso regression, and cross-validation, the analysis should be straightforward. If anyone is interested, I could also provide influenza data for other European countries.
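To give a feel for the kind of analysis involved, here is a rough sketch of lasso regression with cross-validation using glmnet; the data below are simulated stand-ins, not the influenza or search-query data from the preprint, and the preprint's own cross-validation scheme may differ (e.g. respecting the time ordering).

    library(glmnet)

    set.seed(1)
    x <- matrix(rnorm(200 * 50), nrow = 200)   # stand-in predictors (e.g. query volumes)
    y <- x[, 1] - 0.5 * x[, 2] + rnorm(200)    # stand-in response (e.g. ILI incidence)

    cvfit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)    # alpha = 1 gives the lasso
    coef(cvfit, s = "lambda.min")                       # which predictors survive
    predict(cvfit, newx = x[1:5, ], s = "lambda.min")   # fitted values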
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work with local installs. It would be interesting to see whether it's reproducible by a human operator who knows nothing of the project or toolchain.
Tell me what I can improve; perhaps suggest other visualisations for the data?
Tell me what I should improve!
This is a fairly digestible paper with statistical analyses and data visualization that rely heavily on open data from citizen science projects.
This will probably be a non-trivial example to reproduce, owing to: (1) long-running code, (2) dependency on external data sources, (3) possibly challenging software dependencies -- both trivial ones (e.g. setting up custom fonts and plot themes) and critical ones (it requires an external R package wrapping a C++ algorithm, which is not available on CRAN and can sometimes have interesting compiler issues, like when Apple decided to break the clang compiler in 10.0). Ideally you could just run the R code given in the appendix in your local R session, but that may take a bit of effort. We've tried to address those issues by providing caches of the slow-running parts, a Docker container, and sufficient annotations, but who knows!
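For the trickiest dependency, a non-CRAN package wrapping C++ code typically has to be installed from its source repository with a working compiler toolchain; something along these lines, where the repository name is a placeholder and the paper's appendix or README gives the real one:

    # Placeholder repository name; consult the paper's README for the real one.
    # install.packages("remotes")
    remotes::install_github("someorg/somecppwrapper")  # compiles the C++ code locally
    # On macOS, compiler breakage (e.g. after an Xcode/clang update) usually means
    # reinstalling the command line tools or adjusting ~/.R/Makevars.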
This is one of the very few papers in biomolecular simulation for which code and data are available and which should be reproducible. But it is also three years old, so it is an interesting test case for the longevity of reproducible research. The infrastructure software is available at http://www.activepapers.org/python-edition/ (with instructions for installation and use).
I believe this represents the only example of a reproducible paper based on scattering data collected at Diamond Light Source (UK) and the Institut Laue-Langevin (France).
The data and code are provided as RMarkdown, which allows you to reproduce the entire analysis and the plots shown in the paper. It can also generate an HTML document, a nice interface that helps the reader understand why certain procedures were adopted and how to run them.
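In case it helps, rebuilding the report typically comes down to a single render call; the file name below is a placeholder for the repository's actual RMarkdown source.

    # "analysis.Rmd" is a placeholder for the repository's actual RMarkdown file.
    rmarkdown::render("analysis.Rmd", output_format = "html_document")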
This is a two-for-one. The repository contains code for companion papers: the model development, and the model implementation and analysis. As the repository notes, some data are not freely available, so I've made an effort to allow the paper to be replicated as fully as possible with what is available.