If all went right, the analysis should be fully reproducible without the need to make any adjustments. The paper aims to find optimal locations for new parkruns, but we were not 100% sure how 'optimal' should be defined. We provide a few examples, but the code was meant to be flexible enough to allow potential decision makers to specify their own, alternative objectives. The spatial data set is also quite interesting and fun to play around with. Caveat: the full analysis takes a while to run (30+ min) and might require >= 8 GB of RAM.
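For readers who want to experiment with alternative objectives, here is a minimal, purely illustrative sketch (not the paper's actual code; the data frame and column names are hypothetical) of how a user-defined objective could be plugged into a candidate-ranking step:

```python
import pandas as pd

def objective(site):
    """Example objective (assumed column name): people within 5 km who do not yet have a parkrun nearby."""
    return site["pop_within_5km_unserved"]

def rank_candidates(candidate_sites: pd.DataFrame, n_new: int = 3) -> pd.DataFrame:
    """Score every hypothetical candidate location with the chosen objective and return the top n_new."""
    scored = candidate_sites.assign(score=candidate_sites.apply(objective, axis=1))
    return scored.nlargest(n_new, "score")
```

Swapping in a different `objective` function is all it would take to change what "optimal" means.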
Open data and reproducibility were important in this project.
It is a rare example of full reproducibility in the field of plant disease epidemiology.
Low Energy Electron Microscopy (LEEM) is a somewhat specialised form of electron microscopy used to study surfaces and 2D materials. In this paper we describe a set of data processing techniques adapted to the peculiarities of LEEM, combined with a parallelized Python implementation using Dask in separate notebooks. So if you are interested in microscopy, image analysis, clustering of experimental physics data or parallel Python, this paper should be interesting to you.
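As a flavour of what such notebooks do, here is a minimal sketch of lazy, per-image parallel processing with Dask. It is not the paper's pipeline; the stack shape and the offset correction are assumptions for illustration only:

```python
import numpy as np
import dask.array as da

# Hypothetical stack of LEEM images: (n_images, height, width), chunked in groups of images.
stack = da.from_array(np.random.rand(50, 256, 256), chunks=(5, 256, 256))

# Example per-image operation: subtract each image's mean intensity (a crude offset correction).
corrected = stack - stack.mean(axis=(1, 2), keepdims=True)

# Nothing is computed until .compute() (or .persist()) is called; Dask then processes chunks in parallel.
result = corrected.compute()
```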
The results of the four individual studies could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important goalpost for future work.
We propose a simple method to retrieve optical constants from single optical transmittance measurements, in particular in the fundamental absorption region. The construction of the required envelopes is somewhat arbitrary and will depend on the user; however, the method should still be robust and deliver similar results.
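To make the envelope idea concrete, here is one possible (and, as noted, user-dependent) way to construct transmission envelopes in Python: detect the interference extrema and interpolate smoothly through them. This is an illustration, not the paper's implementation:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import PchipInterpolator

def envelopes(wl, T):
    """Interpolate through interference maxima/minima to get upper/lower transmittance envelopes."""
    top, _ = find_peaks(T)    # indices of transmission maxima
    bot, _ = find_peaks(-T)   # indices of transmission minima
    T_max = PchipInterpolator(wl[top], T[top], extrapolate=True)(wl)
    T_min = PchipInterpolator(wl[bot], T[bot], extrapolate=True)(wl)
    return T_max, T_min

# Synthetic fringes, just to show the function runs; real data come from a measurement.
wl = np.linspace(400, 1000, 1200)
T = 0.75 + 0.15 * np.cos(2 * np.pi * 8e4 / wl)
T_max, T_min = envelopes(wl, T)
```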
This paper shows a fun and interesting simulation result. I find it (of course) very important that our results are reproducible. In this paper, however, we did not include the exact code for these specific simulations, but the results should be reproducible using the code of our previous paper in PLOS Computational Biology (Van Oers, Rens et al., https://doi.org/10.1371/journal.pcbi.1003774). I am genuinely curious to see if there is sufficient information for the Biophys J paper or if we should have done better. Other people have already successfully built upon the 2014 (PLOS) paper using our code; see e.g. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.012408 and https://doi.org/10.1101/701037.
The format of the paper is a bit unusual: it is contained in, and compiled as, an R package. Although this would seem, on its face, to make it easier to reproduce, it is an open question how obvious this will be in practice. I wonder to what extent people reproducing the results would prefer this to simple R scripts.
We made a huge effort to ensure the paper is reproducible. But is it?
The original data took quite a while to produce for a previous paper, but for this paper, all tables and figures should be exactly reproducible by simply running the Jupyter notebook.
This is a small dataset with a lot of missing data, so it's quite challenging to produce reliable results. It uses multiple imputation to fill in the missing data, so it would be interesting to see whether the results hold up when this is redone. However, since the multiple imputation takes a couple of hours to run (on a decent laptop), the final multiply imputed data is also included. Additionally, multiply imputed data needs a different statistical analysis approach, which this is a chance to become familiar with.
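For readers new to that approach, the usual workflow is: impute several times, fit the model on each completed dataset, and pool the estimates with Rubin's rules. The sketch below illustrates that workflow with statsmodels' MICE on a synthetic dataset; the paper's own analysis may use different software, variables and models:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Tiny synthetic dataset with missing values, purely for illustration.
rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(50, 10, 200), "dose": rng.normal(5, 1, 200)})
df["outcome"] = 0.3 * df["age"] + 2.0 * df["dose"] + rng.normal(0, 5, 200)
df.loc[rng.random(200) < 0.2, "dose"] = np.nan   # introduce ~20% missingness

imp = mice.MICEData(df)                                   # chained-equations imputation
model = mice.MICE("outcome ~ age + dose", sm.OLS, imp)    # model fitted on each imputed dataset
pooled = model.fit(n_burnin=10, n_imputations=20)         # estimates pooled with Rubin's rules
print(pooled.summary())
```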
We've tried to make it as easy as possible to reproduce. There's some fun physics in the paper and it's all done with Python!
Complex analyses over multiple variables. The paper is in press, so we can still fix errors ahead of publication!
I guess it could be a cool learning experience. The paper is written with knitr, uses a seed, is part of the R package it describes, was openly written using version control (SVN, R-Forge) and is available in an open access journal (@up_jors).
The focus of the project is reproducibility. Here we show how data access differs compared to similar initiatives: https://ropensci.org/blog/2019/05/09/tradestatistics/. Also, similar projects have obscure parts, while ours exposes the code all the way from raw data downloading to dashboard creation.
It uses the drake R package, which should make reproducibility of R projects much easier (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.
This preprint is an attempt to reproduce Google Flu Trends in the Netherlands. The whole paper + code is meant to be easily reproducible and transferable to other countries and/or areas. If you are familiar with time series data, lasso regression and cross-validation, the analysis should be straightforward. If anyone is interested, I could also provide influenza data for other European countries.
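As a rough illustration of that kind of analysis (not the paper's code; the data here are synthetic and the feature names are assumptions), a lasso with time-series-aware cross-validation might look like this:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(42)
X = rng.normal(size=(260, 50))                      # e.g. weekly search-term frequencies (synthetic)
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=260)   # flu-incidence proxy (synthetic)

cv = TimeSeriesSplit(n_splits=5)                    # folds that respect temporal ordering
grid = {"alpha": np.logspace(-3, 1, 20)}            # lasso penalty strengths to try
search = GridSearchCV(Lasso(max_iter=10000), grid, cv=cv)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```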
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work via local installs. It would be interesting to see if it's reproducible by a human operator who knows nothing of the project or toolchain.
Tell me what I can improve on; maybe suggest other visualisations for the data?
Tell me what I should improve!