I tried hard to make it reproducible, so hopefully this paper can serve as an example of how reproducibility can be achieved. I think that being reproducible with only a few commands typed in a terminal is quite an achievement, at least in my field, where I usually see code published along with the paper but with almost no documentation on how to rerun it.
Because:
- Two fellow PhD students working on different topics have been able to reproduce some figures by following the README instructions, and I hope this extends to other people.
- I've tried to incorporate as many best practices as possible to make my code and data open and accessible.
- I've tried to make sure that my data is exactly reproducible with the specified random seed strategy (see the sketch after this list).
- The paper suggests a method that should be useful to other researchers in my field, and it is only useful if my results are reproducible.
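For illustration, here is a minimal sketch of what such a fixed-seed strategy can look like in Python; the `set_seeds` helper and the seed value are hypothetical, not the paper's actual code:

```python
import random

import numpy as np

SEED = 12345  # hypothetical value; the real seed would live in the notebook or a config file


def set_seeds(seed: int = SEED) -> None:
    """Seed every random number generator the analysis touches,
    so that a rerun produces bit-identical data."""
    random.seed(seed)
    np.random.seed(seed)


set_seeds()  # call once, before any data generation or sampling
```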
- This paper is a good example of a standard social science study that is (I hope!) fully reproducible, from the main analysis to the supplementary analyses and figures.
- I have not yet received any external feedback on its reproducibility, so I would be interested to see whether I have overlooked any gaps in the reproduction workflow.
The results of the four individual studies could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important benchmark for future work.
The original data took quite a while to produce for a previous paper, but for this paper all tables and figures should be exactly reproducible simply by rerunning the Jupyter notebook.
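As a usage sketch, a notebook like this can typically be executed headlessly with nbconvert; the filename `analysis.ipynb` below is a placeholder, not the paper's actual notebook name:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Load the notebook (filename is a placeholder for the paper's actual notebook).
nb = nbformat.read("analysis.ipynb", as_version=4)

# Run all cells top to bottom, as a reader reproducing the paper would.
ExecutePreprocessor(timeout=600).preprocess(nb)

# Save the executed copy with all tables and figures regenerated.
nbformat.write(nb, "analysis_executed.ipynb")
```

The equivalent shell one-liner is `jupyter nbconvert --to notebook --execute analysis.ipynb`.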