Figure 4 reports a numerical benchmark with absolute runtimes and memory usages that can be reproduced directly from the provided source code. The benchmark was run on the author's computer, and since numerical performance and parallel scaling are somewhat hardware-dependent, it would be interesting to see whether others can achieve comparable performance on their own machines. The benchmark simulates a tissue growing from one to 10,000 cells in just ten minutes, which makes it an easy entry point into tissue modeling and simulation. No input data is needed to reproduce the output, and the program has no dependencies.
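Hardware comparisons are easiest when everyone records wall time and peak memory the same way. Here is a minimal sketch for Unix-like systems; the benchmark executable name is a hypothetical placeholder for whatever binary the paper's source code builds:

```python
import resource
import subprocess
import time

# Hypothetical name; substitute the executable built from the paper's source.
BENCHMARK = "./tissue_benchmark"

# Wall-clock time of the full benchmark run.
start = time.perf_counter()
subprocess.run([BENCHMARK], check=True)
elapsed = time.perf_counter() - start

# Peak resident set size of the child process (kilobytes on Linux).
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

print(f"wall time: {elapsed:.1f} s")
print(f"peak memory: {peak_kb / 1024:.1f} MiB")
```

Reporting both numbers alongside basic hardware details (CPU model, core count, RAM) would make results from different machines comparable to Fig. 4.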
We spend a lot of time making our analyses reproducible. A review would let us gather some evidence on whether we are succeeding.
Metadata annotation is key to reproducibility in sequencing experiments. Reproducing this research with the provided scripts will also show how the level of annotation has held up in the years since the paper was published in 2015.
- This paper is a good example of a standard social science study that is (I hope!) fully reproducible, from the main analysis to the supplementary analyses and figures.
- I have not yet received any external feedback on its reproducibility, so I would be interested to see whether I have overlooked any gaps in the reproduction workflow.
The results of the individual studies (4) could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important goalpost for future work.
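For readers unfamiliar with how a pooled estimate can overturn the impression given by individual studies, a minimal inverse-variance (fixed-effect) pooling sketch is shown below. The effect sizes and variances are made-up illustrative numbers, not the paper's data:

```python
import numpy as np

# Illustrative per-study effect sizes and variances (NOT the paper's data).
effects = np.array([0.30, 0.10, -0.05, 0.02])
variances = np.array([0.02, 0.01, 0.015, 0.005])

# Inverse-variance weighting: more precise studies get more weight,
# so a few noisy positive results need not dominate the pooled estimate.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

With these numbers the pooled confidence interval includes zero even though most individual effects are positive, which is the pattern the statement above describes.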