This article used an open-source Python repository for its analysis, which makes it well suited for reproduction as the literature on the intersection of urban planning and climate change evolves. The adapted code is published alongside the article.
This article was meant to be entirely reproducible, with the data and code published alongside the article. It is, however, not embedded within a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
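For illustration, a container setup of the kind alluded to above could be as small as the following Dockerfile. This is a hypothetical sketch, not part of the published article; the base image tag, the `requirements.txt` file, and the `run_analysis.py` entry script are all assumptions standing in for the project's actual files.

```dockerfile
# Hypothetical minimal container for re-running a published Python analysis.
# File names and the entry script are illustrative assumptions.
FROM python:3.10-slim

WORKDIR /analysis

# Pin exact dependency versions so the environment can be rebuilt years later.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the published data and code into the image.
COPY . .

# Re-run the full analysis with a single command.
CMD ["python", "run_analysis.py"]
```

Freezing the interpreter version and the dependency list this way is what gives a containerized analysis a fighting chance of still running "tomorrow, next year".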
Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope the problem is general enough that the insights translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope this is understandable to future collaborators and scientists in my field, but the code was not reviewed internally, and I did not receive any feedback on it from the reviewers either. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!
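To make the kind of simulation described above concrete, here is a minimal sketch in Python (not the author's actual code, whose standards, noise levels, and calibration model I don't know): it estimates, by Monte Carlo, how the spread of a fitted linear calibration slope shrinks as more standards are measured. All numeric values and the function name are hypothetical.

```python
# Hypothetical sketch: how the number of calibration standards affects
# the uncertainty of a simple linear machine calibration.
import numpy as np

rng = np.random.default_rng(42)

def calibration_slope_sd(n_standards, n_simulations=1000, noise_sd=0.05):
    """Monte Carlo estimate of the spread in fitted calibration slopes."""
    # Accepted (known) values of the standards, spread across the range.
    true_values = np.linspace(0.2, 0.9, n_standards)
    slopes = np.empty(n_simulations)
    for i in range(n_simulations):
        # Simulate noisy machine readings of the standards
        # (assumed linear response: slope 1.1, offset 0.03)...
        measured = 1.1 * true_values + 0.03 + rng.normal(0, noise_sd, n_standards)
        # ...and fit a straight-line calibration to them.
        slopes[i] = np.polyfit(true_values, measured, 1)[0]
    return slopes.std()

for n in (3, 5, 10, 20):
    print(f"{n:>2} standards: slope sd ~ {calibration_slope_sd(n):.4f}")
```

A literate version of the same idea would interleave each of these steps with prose explaining the choice of standards and noise model, which is exactly the one-step-at-a-time structure described above.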