We put a lot of work into making the analyses from the paper reproducible, and we are very curious how the documentation could be improved and whether people run into any problems.
I used a number of different tools and strategies to make this paper easily reproducible at several levels. There is a Docker container for the highest level of reproducibility, and package versions are managed with renv. The data used in the paper is hosted on Zenodo to avoid the long queue times of downloading from the Climate Data Store and to future-proof the analysis in case that service goes away, and the files are checksummed before use.
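For anyone attempting the local (non-Docker) route, the idea boils down to restoring the renv library and fetching the Zenodo data with a checksum test. A minimal R sketch of that workflow, where the record URL, file name, and hash are placeholders rather than the actual values from the repository:

```r
# Restore the exact package versions recorded in the repository's renv.lock
renv::restore()

# Fetch the dataset from Zenodo and verify it before use.
# The URL, file name, and MD5 hash below are placeholders, not the real ones.
data_url     <- "https://zenodo.org/record/XXXXXXX/files/climate_data.nc"
data_file    <- "climate_data.nc"
expected_md5 <- "0123456789abcdef0123456789abcdef"

if (!file.exists(data_file)) {
  download.file(data_url, destfile = data_file, mode = "wb")
}

if (unname(tools::md5sum(data_file)) != expected_md5) {
  stop("Checksum mismatch: the downloaded file does not match the expected data.")
}
```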
There are many applications for multi-MeV X-rays. Their penetrating properties make them well suited to scanning dense objects in industry, and their ionising properties can destroy tumours in radiotherapy. Their energies are also close to those of nuclear transitions, so they can trigger nuclear reactions to break down nuclear waste into medical isotopes, or to reveal smuggled nuclear materials for port security. Laser-driven X-ray generation offers a compact and efficient way to create a bright X-ray source without having to construct a large synchrotron. To fully utilise this capability, work on optimising the target design and understanding the underlying X-ray mechanisms is essential. The hybrid-PIC code is in a unique position to model the full interaction, so its ease of use and reproducibility are crucial for this field to develop.
Most of the material is available as Jupyter notebooks on GitHub, and it should be easy to reproduce with the help of Binder. With the notebooks, you can experiment with parameters different from the ones analysed in the paper. The repository also contains a large dataset of physical parameters for the galaxies analysed in this work. We expect this work to be easily reproducible by following the steps described in the repository.
I suggested a few papers last year, and I'm hoping that we've improved our reproducibility with this one. We've done our best to package it up both as a Docker image and as an R package. I'd be curious to know which turns out to be the better way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?
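For the R-package route, the mechanics are the standard ones; a minimal sketch, with a placeholder repository and package name rather than the real ones:

```r
# Install the paper's package from GitHub with its vignettes built
# ("author/paper-pkg" and "paperPkg" are placeholders, not the actual names).
remotes::install_github("author/paper-pkg", build_vignettes = TRUE)

# List the vignettes and open the one that walks through the analysis
browseVignettes("paperPkg")
```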
I tried hard to make this paper as reproducible as possible, but as the techniques and dependencies become more complex, it is hard to make everything 100% clear. Any form of feedback is more than welcome.
It uses the drake R package, which should make reproducibility of R projects much easier (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.
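For context, a drake-style make.R is typically only a handful of lines; the sketch below is illustrative and not the actual plan from this project:

```r
library(drake)

# Each target is rebuilt only when its inputs change; the functions and
# file names here are illustrative placeholders, not the project's own.
plan <- drake_plan(
  raw_data = read.csv(file_in("data/raw.csv")),
  cleaned  = na.omit(raw_data),
  model    = lm(y ~ x, data = cleaned),
  results  = summary(model)
)

# Build all outdated targets; results are cached under .drake/
make(plan)
```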
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work via local installs. It would be interesting to see whether it's reproducible by a human operator who knows nothing of the project or its toolchain.