I hope that the evaluation framework introduced in the paper will be adopted by other researchers working on mutational signatures.
We put a lot of work into making the analyses from the paper reproducible, and we are very curious whether people run into any problems and how the documentation could be improved.
In this paper, an R package was used to improve the reproducibility of the analyses, so it would be good to know to what extent this works in practice. The R package covers the following analyses: (1) data trimming and preparation, (2) descriptive statistics, (3) reliability and correlations, (4) t-tests and Bayesian t-tests, (5) latent-change models (a structural equation modeling approach), and (6) multiverse analyses. Furthermore, all deidentified data, experiment code, research materials, and results are publicly accessible on the Open Science Framework (OSF) at https://osf.io/ngfxv. The study’s design and analyses were preregistered on OSF; the preregistration can be accessed at https://osf.io/tywu7.
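For readers unfamiliar with what such a pipeline looks like, here is a purely illustrative R sketch of step (4), a classical paired t-test alongside its Bayesian counterpart from the BayesFactor package. The simulated data and the column names (pre, post) are hypothetical, and the paper's own package may organize these steps quite differently.

```r
# Illustrative sketch only: a paired t-test and a Bayesian paired t-test.
# The data are simulated and the column names are hypothetical; the paper's
# R package may wrap these analyses differently.
library(BayesFactor)

set.seed(1)
dat <- data.frame(
  pre  = rnorm(50, mean = 100, sd = 15),
  post = rnorm(50, mean = 103, sd = 15)
)

# Classical paired t-test on the pre/post difference
t.test(dat$post, dat$pre, paired = TRUE)

# Bayesian paired t-test: Bayes factor comparing H1 (nonzero difference) to H0
ttestBF(x = dat$post, y = dat$pre, paired = TRUE)
```

By default, ttestBF places a medium-width Cauchy prior on the standardized effect size; the analyses in the paper may of course use different priors and settings.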
The method is trained on the data that were available at the time, but it is meant to be re-trainable as soon as new data are published. It would be great to know for certain that someone other than us can actually do this. If we receive any feedback, we would be very happy to improve our GitHub repository to make reproduction easier!
In theory, reproducing this paper should only require cloning a public Git repository and executing a Makefile (detailed in the README of the paper repository at https://github.com/psychoinformatics-de/paper-remodnav). We've set up our paper to be generated dynamically, retrieving and installing the relevant data and software automatically, and we've even created a tutorial about it so that others can reuse the same setup for their own work. Nevertheless, we have, for example, never tried it out across different operating systems: who knows whether it works on Windows? We'd love to share the tips and tricks we found to work, and we'd love even more to get feedback on how to improve this further.
Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which standards we use and how many, I hope that the problem is general enough that the insights can translate to any kind of measurement that relies on machine calibration. I've committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but the code has not been reviewed internally, and I didn't receive any feedback on it from the reviewers either. I would love to see whether what I consider "reproducible code" is actually reproducible, and to learn what I can improve for future projects!