The results of this paper have been used in multiple subsequent studies as a benchmark against which other methods of performing the same calculation have been tested. Other groups have challenged the results as suffering from finite-size effects, in particular the calculations on mixtures of cubic and hexagonal ice. Should there be time during the event, participants could check this by performing calculations on larger unit cells. Each individual calculation should converge adequately within 96 hours, making it amenable to an HPC ReproHack. Given modern HPC hardware, many such calculations could be run concurrently on a single HPC node.
Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and how to optimize which and how many standards we use, I hope that the problem is general enough that the insights can translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but I have not had any internal code review, nor did I receive any feedback on the code from the reviewers. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!
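To give a flavour of the kind of code chunk such a literate document interleaves with its narrative, here is a minimal, hypothetical sketch (not taken from the paper) of simulating a linear machine calibration against a set of known standards and asking how the fitted calibration varies with the number of standards; all names, values, and the linear-calibration assumption are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_calibration(n_standards, true_slope=1.02, true_intercept=0.15, noise_sd=0.01):
    """Simulate measuring known standards and fitting a linear calibration (raw -> true scale)."""
    known = np.linspace(0.2, 0.7, n_standards)                        # accepted values of the standards
    measured = true_slope * known + true_intercept + rng.normal(0, noise_sd, n_standards)
    slope, intercept = np.polyfit(measured, known, 1)                 # recover the calibration line
    return slope, intercept

# How the spread of the fitted slope shrinks as more standards are included
for n in (2, 3, 5, 10):
    fits = np.array([simulate_calibration(n) for _ in range(1000)])
    print(f"{n:2d} standards -> slope sd: {fits[:, 0].std():.4f}")
```

The actual simulations in the paper are more involved (and specific to clumped-isotope standards), but the literate document walks through steps of roughly this shape, one chunk at a time.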