This article used an open-source Python repository for its analysis, making it well-suited for reproduction as the literature on the intersection of urban planning and climate change evolves. The adapted code is published alongside the article.
This article was meant to be entirely reproducible, with the data and code published alongside the article. It is, however, not embedded within a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
The direct numerical simulations (DNS) for this paper were conducted using Basilisk (http://basilisk.fr/). As Basilisk is free software written in C, it can be readily installed on any Linux machine, and it should then be straightforward to run the driver code to reproduce the DNS from this paper. That said, the numerical solutions presented here are the result of many high-fidelity simulations, each of which took approximately 24 CPU hours running on 4 to 8 cores. The main difficulty in reproducing the results should therefore be the computational cost, so HPC resources will be required. The DNS in this paper were used to validate the presented analytical solutions and to extend the results to a longer timescale. Reproducing them would build confidence in the conclusions, showing that they are independent of the system architecture on which they were produced.
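For readers unfamiliar with Basilisk, a driver is just a short C program compiled with Basilisk's qcc compiler. The sketch below is a minimal, hypothetical example of what such a driver looks like, not the paper's actual driver code; the solver header is Basilisk's standard centred Navier-Stokes module, and the grid resolution, viscosity, and end time are illustrative values only.

```c
/* Minimal, hypothetical Basilisk driver sketch -- not the paper's actual
 * driver. Compile with Basilisk's qcc, e.g.:  qcc driver.c -o driver -lm  */
#include "navier-stokes/centered.h"   /* centred Navier-Stokes solver */

int main()
{
  L0 = 1.;                            /* unit-square domain (illustrative) */
  init_grid (256);                    /* 256 x 256 grid (illustrative) */
  const face vector muc[] = {1e-3, 1e-3};  /* constant viscosity (illustrative) */
  mu = muc;
  run();                              /* hand control to the Basilisk event loop */
}

/* log iteration number, simulation time and timestep at every step */
event logfile (i++) {
  fprintf (stderr, "%d %g %g\n", i, t, dt);
}

/* stop the simulation at t = 10 (illustrative end time) */
event end (t = 10) {
  printf ("finished at i = %d, t = %g\n", i, t);
}
```

Even a sketch like this makes the cost estimate above concrete: each production run at full resolution has to advance the event loop for the whole physical timescale of interest, which is where the roughly 24 CPU hours per simulation come from.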
Most of the material is available as Jupyter notebooks on GitHub, and it should be easy to reproduce with the help of Binder. With the notebooks, you can experiment with parameters other than the ones analysed in the paper. The repository also contains a large dataset of physical parameters of the galaxies analysed in this work. We expect this work to be easily reproducible by following the steps described in the repository.
I tried hard to make this paper as reproducible as possible, but as techniques and dependencies become more complex, it is hard to make everything 100% clear. Any form of feedback is more than welcome.
- This paper is a good example of a standard social science study that is (I hope!) fully reproducible, from the main analysis to the supplementary analyses and figures.
- I have not yet received any external feedback on its reproducibility, so I would be interested to see whether I have overlooked any gaps in the reproduction workflow I anticipated.
The results of the individual studies (4) could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion sets an important goalpost for future work.
This paper shows a fun and interesting simulation result. I find it (of course) very important that our results are reproducible. In this paper, however, we did not include the exact code for these specific simulations, but the results should be reproducible using the code of our previous paper in PLOS Computational Biology (Van Oers, Rens et al., https://doi.org/10.1371/journal.pcbi.1003774). I am genuinely curious to see whether that provides sufficient information to reproduce the Biophys J paper, or whether we should have done better. Other people have already successfully built upon the 2014 (PLOS) paper using our code; see, e.g., https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.012408 and https://doi.org/10.1101/701037.