In this paper, an R package was used to improve the reproducibility of the analyses, so it would be valuable to know to what extent this works in practice. The R package covers the following analyses: (1) data trimming and preparation, (2) descriptive statistics, (3) reliability and correlations, (4) t-tests and Bayesian t-tests, (5) latent-change models (a structural equation modeling approach), and (6) multiverse analyses. Furthermore, all deidentified data, experiment code, research materials, and results are publicly accessible on the Open Science Framework (OSF) at https://osf.io/ngfxv. The study's design and analyses were preregistered on OSF; the preregistration can be accessed at https://osf.io/tywu7.
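To give a flavor of what steps (1)-(5) involve, here is a minimal R sketch using common packages (psych, BayesFactor, lavaan). The data, the variable names (`pre`, `post`), and the trimming cutoff are all invented for illustration; the paper's own package organizes these steps differently.

```r
# Toy pre/post data standing in for the real study variables
library(psych)        # descriptive statistics
library(BayesFactor)  # Bayesian t-tests
library(lavaan)       # latent-change models

set.seed(1)
d <- data.frame(pre = rnorm(100, mean = 50, sd = 10))
d$post <- d$pre + rnorm(100, mean = 2, sd = 5)

# (1) Data trimming: drop observations more than 3 SD from the mean
z_pre  <- as.vector(scale(d$pre))
z_post <- as.vector(scale(d$post))
d <- d[abs(z_pre) < 3 & abs(z_post) < 3, ]

# (2) Descriptives and (3) correlations (reliability, e.g. psych::alpha,
# would need item-level data, which this toy data set lacks)
describe(d)
cor(d$pre, d$post)

# (4) Classical and Bayesian paired t-tests
t.test(d$post, d$pre, paired = TRUE)
ttestBF(x = d$post, y = d$pre, paired = TRUE)

# (5) A univariate latent-change-score model in lavaan
model <- "
  change =~ 1*post    # latent change score
  post   ~  1*pre     # autoregression fixed at 1: post = pre + change
  post  ~~  0*post    # no residual variance beyond the change score
  post   ~  0*1       # no separate intercept for post
  change ~  1 + pre   # mean change; change regressed on baseline
  change ~~ change    # individual differences in change
"
fit <- sem(model, data = d)
summary(fit)
```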
The method is trained on the data that were available at the time, but it is meant to be retrained as soon as new data are published. We would like to be confident that someone other than us can actually do this. If we receive any feedback, we will be happy to improve our GitHub repository to make reproduction easier!
Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which standards we use and how many, I hope the problem is general enough that the insights translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope it is understandable to future collaborators and to scientists in my field, but the code has not been reviewed internally, and I did not receive any feedback on it from the reviewers either. I would love to find out whether what I consider "reproducible code" is actually reproducible, and to learn what I can improve for future projects!
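As a taste of the literate-program style, here is a toy base-R simulation in the same spirit: measure a few standards of known composition, fit a linear calibration, and ask how the error in a corrected sample shrinks with the number of replicate standard measurements. All numbers here (standard values, machine bias, noise level) are made up for illustration and are not the actual clumped-isotope setup.

```r
set.seed(42)

true_vals <- c(0.2, 0.4, 0.7, 0.9)  # hypothetical accepted standard values
slope     <- 1.05                   # hypothetical machine bias
intercept <- -0.03
noise_sd  <- 0.02

calibrate_once <- function(n_per_standard) {
  # measure each standard n times with noise
  meas <- data.frame(true = rep(true_vals, each = n_per_standard))
  meas$raw <- intercept + slope * meas$true +
    rnorm(nrow(meas), sd = noise_sd)
  # fit the calibration line: accepted value as a function of raw reading
  fit <- lm(true ~ raw, data = meas)
  # correct one unknown sample whose true value we set to 0.55
  sample_raw <- intercept + slope * 0.55 + rnorm(1, sd = noise_sd)
  predict(fit, newdata = data.frame(raw = sample_raw)) - 0.55
}

# spread of the corrected-sample error as replicate counts grow
n_grid <- c(2, 5, 10, 20)
err <- sapply(n_grid, function(n) sd(replicate(1000, calibrate_once(n))))
data.frame(replicates = n_grid, sd_of_error = round(err, 4))
```

In the real problem, which standards to run (how they span the composition range) matters as much as how many, which is exactly the trade-off the step-by-step simulations explore.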