The clumped isotope proxy provides a way to reconstruct temperatures from the geologic past. The measurement is inherently imprecise, however, due to the physics that underlie it.
This paper investigates how we can improve measurement precision by changing which standards are used in the correction steps. It comes with R code that simulates various ways of standardizing clumped isotope measurements, and I hope that the source code, archived on Zenodo and hosted on GitHub, makes the full set of simulations and figure creation reproducible.
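To give a flavour of what "standardizing" means here, the sketch below is a minimal toy example (not the paper's actual code): synthetic measurements of standards with hypothetical accepted values are used to fit a linear empirical transfer function, which then corrects an unknown sample back to the standard scale. All values and names are invented for illustration.

```r
set.seed(42)
# hypothetical accepted values for three standards, measured 10 times each
accepted <- rep(c(0.20, 0.45, 0.70), each = 10)
# an unknown linear machine response plus measurement noise
slope <- 0.9; intercept <- 0.05
measure <- function(true, sd = 0.02) {
  slope * true + intercept + rnorm(length(true), sd = sd)
}
meas <- measure(accepted)
# empirical transfer function: map measured values back to the accepted scale
etf <- lm(accepted ~ meas)
# correct replicate measurements of an unknown sample with true value 0.6
unknown   <- measure(rep(0.6, 10))
corrected <- predict(etf, newdata = data.frame(meas = unknown))
mean(corrected)  # close to 0.6
```

The precision of `corrected` depends on how many standards are run and where their values sit relative to the unknown, which is the kind of trade-off the simulations explore.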
Even though the paper focuses on a specific measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope the problem is general enough that the insights translate to any kind of measurement that relies on machine calibration.
I've committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but the code has not been reviewed internally and I also didn't receive any feedback on it from the paper's reviewers. I would love to find out whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!
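For readers unfamiliar with the format, a literate program in R typically looks like the generic R Markdown sketch below (invented text, not an excerpt from the paper's source): prose paragraphs alternate with fenced `{r}` chunks that are executed when the document is rendered.

````markdown
We first draw replicate measurements of each standard,
then fit the correction described in the previous section.

```{r simulate-standards}
n_reps <- 10                         # replicates per standard
meas   <- rnorm(n_reps, mean = 0.5)  # placeholder simulation
mean(meas)
```

The mean above feeds into the next chunk, where we ...
````

Rendering such a file (e.g. with `rmarkdown::render()`) re-runs every chunk, so the text, code, and figures stay in sync by construction.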
I want to know whether the code is easy to read and whether the documentation and literate comments are enough to follow the reasoning steps. How much "basic R" knowledge can I assume, and how much should be explained? Should I focus on the ideas, or also document the technical details of how the simulations are implemented?