I tried hard to make this paper as reproducible as possible, but as techniques and dependencies become more complex, it is hard to make everything 100% clear. Any form of feedback is more than welcome.
A currently submitted paper on the impact of COVID-19 on mental health. Unique clinical data (a time series spanning the pandemic onset) & methods, hopefully fun to work on. Possibly too boring / easy to reproduce given my data & code? Not sure.
To see whether we did a good enough job of providing data and methods, and to check how the code has aged with respect to current libraries.
- This paper is a good example of a standard social science study that is (I hope!) fully reproducible, from the main analysis to the supplementary analyses and figures.
- I have not yet received any external feedback on its reproducibility, so I would be interested to see whether I have overlooked any gaps in the reproduction workflow.
If all went right, the analysis should be fully reproducible without the need to make any adjustments. The paper aims to find optimal locations for new parkruns, but we were not 100% sure how 'optimal' should be defined. We provide a few examples, but the code was meant to be flexible enough to allow potential decision makers to specify their own, alternative objectives. The spatial data set is also quite interesting and fun to play around with. Caveat: the full analysis takes a while to run (~30+ min) and might require at least 8 GB of RAM.
Open data and reproducibility were important in this project.
It is a rare example of full reproducibility in the field of plant disease epidemiology.
Low Energy Electron Microscopy (LEEM) is a somewhat specialized form of electron microscopy used to study surfaces and 2D materials. In this paper we describe a set of data processing techniques applied to LEEM data and adapted to its peculiarities. This is combined with a parallelized Python implementation using Dask in separate notebooks (see the sketch below). So if you are interested in microscopy, image analysis, clustering of experimental physics data or parallel Python, this paper should be interesting to you.
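For readers who have not used Dask before, here is a minimal sketch (not the paper's code) of the kind of lazy, parallel image-stack processing the notebooks build on; the array shape, chunking, and operations are illustrative assumptions only.

```python
# Minimal illustration (not the paper's code): lazy, parallel processing of an
# image stack with dask.array. Shapes and operations are placeholder assumptions.
import dask.array as da

# Stand-in for a LEEM stack: (n_energies, height, width), split into chunks
stack = da.random.random((100, 512, 512), chunks=(10, 512, 512))

# Normalize each frame by its mean intensity; everything stays lazy until compute()
frame_means = stack.mean(axis=(1, 2), keepdims=True)
normalized = stack / frame_means

# Reshape to (n_pixels, n_energies), e.g. as input for per-pixel clustering
pixels = normalized.reshape(stack.shape[0], -1).T

result = pixels.mean(axis=1).compute()  # triggers the parallel computation
print(result.shape)
```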
The results of the four individual studies could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important reference point for future work.
We propose a simple method to retrieve optical constants from single optical transmittance measurements, in particular in the fundamental absorption region. The construction of the required envelopes is somewhat arbitrary and will depend on the user; however, the method should still be robust and deliver similar results.
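As a purely illustrative sketch of what constructing the envelopes can look like (the paper leaves their construction to the user, so this is one possible choice, not the authors' procedure): detect the interference-fringe extrema of the transmittance spectrum and interpolate through them. The synthetic spectrum, extrema detection, and spline choice below are all assumptions.

```python
# Illustrative only: one way to build upper/lower transmittance envelopes.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import interp1d

wl = np.linspace(400, 1000, 600)                                  # wavelength (nm)
T = 0.7 + 0.2 * np.sin(wl / 15.0) * np.exp(-(wl - 400) / 600)     # fake fringes

imax = argrelextrema(T, np.greater, order=5)[0]   # indices of fringe maxima
imin = argrelextrema(T, np.less, order=5)[0]      # indices of fringe minima

# Interpolate through the extrema to obtain smooth envelopes over the full range
T_upper = interp1d(wl[imax], T[imax], kind="cubic", fill_value="extrapolate")(wl)
T_lower = interp1d(wl[imin], T[imin], kind="cubic", fill_value="extrapolate")(wl)
```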
This paper shows a fun and interesting simulation result. I find it (of course) very important that our results are reproducible. In this paper, however, we did not include the exact code for these specific simulations, but the results should be reproducible using the code of our previous paper in PLOS Computational Biology (Van Oers, Rens et al., https://doi.org/10.1371/journal.pcbi.1003774). I am genuinely curious to see whether there is sufficient information for the Biophys J paper or whether we should have done better. Other people have already successfully built upon the 2014 (PLOS) paper using our code; see e.g. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.012408 and https://doi.org/10.1101/701037.
The format of the paper is a bit unusual: it is contained in, and compiled as, an R package. Although this would seem, on its face, to make it easier to reproduce, it is an open question how straightforward it will be in practice. I wonder to what extent people reproducing the results would prefer this to simple R scripts.
We made a huge effort to ensure the paper is reproducible. But is it?
The original data took quite a while to produce for a previous paper, but for this paper, all tables and figures should be exactly reproducible by simply running the Jupyter notebook.
This is a small dataset with a lot of missing data, so it is quite challenging to produce reliable results. The study uses multiple imputation to fill in the missing data, so it would be interesting to see whether the results hold up when this is redone. However, since the multiple imputation takes a couple of hours to run (on a decent laptop), the final multiply imputed data is also included. Additionally, multiply imputed data needs a different statistical analysis approach, which you can get familiar with along the way (see the sketch below).
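The entry does not say which toolchain the paper uses (in R this is typically the mice package). Purely to illustrate what "a different statistical analysis approach" means here — fitting the model on each imputed dataset and pooling the estimates with Rubin's rules — below is a minimal, hypothetical sketch using statsmodels' MICE implementation in Python; the data and variable names are made up and are not the paper's.

```python
# Hypothetical sketch of multiple imputation + pooled analysis (Rubin's rules),
# using statsmodels' MICE. This is NOT the paper's code or data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(size=200)
df.loc[rng.random(200) < 0.2, "x2"] = np.nan       # introduce missingness in x2

imp = mice.MICEData(df)                             # chained-equations imputation
analysis = mice.MICE("y ~ x1 + x2", sm.OLS, imp)    # model fit on each imputed dataset
results = analysis.fit(n_burnin=10, n_imputations=20)
print(results.summary())                            # estimates pooled across imputations
```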
We've tried to make it as easy as possible to reproduce. There's some fun physics in the paper and it's all done with Python!
Complex analyses over multiple variables. The paper is in press, so we can still fix errors ahead of publication!
I guess it could be a cool learning experience. The paper is written with knitr, uses a fixed random seed, is part of the R package it describes, was openly written using version control (SVN, R-Forge), and is available in an open-access journal (@up_jors).
The focus of the project is reproducibility. Here we show how data access differs from similar initiatives: https://ropensci.org/blog/2019/05/09/tradestatistics/. Also, similar projects have opaque parts, while ours exposes the code all the way from raw data download to dashboard creation.
It uses the drake R package, which should make reproducibility of R projects much easier (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.