This paper presents a fine example of a high-throughput computational materials screening study, focusing mainly on carbon nanoclusters of different sizes. In the paper, a diverse set of empirical and machine-learned interatomic potentials commonly used to simulate carbonaceous materials is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as the comparison criteria. Trying to reproduce the data presented here (even if you only consider a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure prediction problem. Even though we concentrate here on isolated/finite nanoclusters, AIRSS (and other similar approaches such as USPEX, CALYPSO, and GMIN) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.
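As a rough illustration of the benchmarking idea, the sketch below scores one empirical potential against DFT reference energies for a set of candidate clusters. It assumes ASE is installed; the file names, the Lennard-Jones stand-in potential, and the DFT reference data are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal sketch of benchmarking an empirical potential against DFT data.
# File names and the Lennard-Jones calculator are hypothetical stand-ins.
import numpy as np
from ase.io import read
from ase.calculators.lj import LennardJones

# Hypothetical trajectory of candidate carbon clusters and matching
# DFT reference energies (one value per structure).
clusters = read("cluster_candidates.xyz", index=":")
e_dft = np.loadtxt("dft_reference_energies.dat")

calc = LennardJones()  # swap in whichever potential is being benchmarked
e_pot = []
for atoms in clusters:
    atoms.calc = calc
    e_pot.append(atoms.get_potential_energy())

# Compare relative energies, since absolute scales differ between methods.
e_pot = np.asarray(e_pot) - np.min(e_pot)
e_dft = e_dft - np.min(e_dft)
print("RMSE vs DFT (eV):", np.sqrt(np.mean((e_pot - e_dft) ** 2)))
```

Repeating this for each potential and for each structural criterion used in the paper (not just energies) reproduces the spirit of the comparison.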
DFT calculations are in principle reproducible between different codes, but differences can arise from poor choices of convergence tolerances, inappropriate use of pseudopotentials, and other numerical considerations. An independent validation of the key quantities needed to compute electrical conductivity would therefore be valuable. In this case we have published our input files for calculating the four quantities needed to parametrise the transport simulations from which we compute the electrical conductivity: the electronic band structure, the phonon dispersions, the electron-phonon coupling constants, and the third derivatives of the force constants. Each in turn is more sensitive to convergence tolerances than the last, and it is the final quantity on which the conclusions of the paper critically depend. Reference output data is provided for comparison at the data URL below. We note that the pristine CNT results (dark red line) in figure 3 are an independent reproduction of earlier work, so we are confident the Boltzmann transport simulations are reproducible. The calculated DFT inputs to these (in the case of Be encapsulation) have not, to our knowledge, been independently reproduced.
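Because each quantity is more convergence-sensitive than the last, a reproduction attempt will likely involve explicit convergence testing. The sketch below is a generic illustration of that loop, not the authors' actual workflow; `run_dft` is a hypothetical wrapper around whichever DFT code computes the quantity of interest for a given k-point grid.

```python
# Generic convergence loop; run_dft is a hypothetical wrapper around the
# DFT code, returning the quantity of interest (e.g. a phonon frequency
# or an electron-phonon coupling constant) for a given k-point grid.
def converge(run_dft, grids, tol=1e-3):
    """Walk through increasingly dense k-point grids until the result
    changes by less than tol between successive grids."""
    previous = None
    for grid in grids:
        value = run_dft(kpoints=grid)
        if previous is not None and abs(value - previous) < tol:
            return grid, value
        previous = value
    raise RuntimeError("not converged on the grids tested")

# Example usage, with tighter tolerances for the higher-order quantities:
# grid, value = converge(run_dft, [(4, 4, 4), (6, 6, 6), (8, 8, 8)], tol=1e-4)
```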
In theory, reproducing this paper should only require cloning a public Git repository and executing a Makefile (detailed in the README of the paper repository at https://github.com/psychoinformatics-de/paper-remodnav). We've set up our paper to be dynamically generated, retrieving and installing the relevant data and software automatically, and we've even created a tutorial about it so that others can reuse the same setup for their own work. Nevertheless, we have, for example, never tried it out across different operating systems: who knows whether it works on Windows? We'd love to share the tips and tricks we found to work, and we'd love feedback on how to improve this further even more.
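In concrete terms, the intended reproduction path amounts to something like the following, sketched here with Python's standard library; the exact build targets are whatever the repository's README specifies.

```python
# Sketch of the intended reproduction path: clone the paper repository
# and let its Makefile fetch the data and software and build the paper.
import subprocess

subprocess.run(
    ["git", "clone", "https://github.com/psychoinformatics-de/paper-remodnav"],
    check=True,
)
subprocess.run(["make"], cwd="paper-remodnav", check=True)
```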
The current code is written in Torch, which is no longer actively maintained. Since deep learning in nanophotonics is an area of active interest (e.g. for the design of new metamaterials), it is important to update the code to use a more modern deep learning library such as TensorFlow/Keras.
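As a purely hypothetical sketch of the target framework, a small fully connected network in TensorFlow/Keras could be defined as below; the layer sizes and the mapping from design parameters to an optical response are placeholders, not the architecture of the original Torch code.

```python
# Hypothetical Keras sketch; shapes and layer widths are placeholders,
# not the original Torch architecture.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),           # e.g. geometric design parameters
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(200),                  # e.g. a discretised optical spectrum
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```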
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work via local installs. It would be interesting to see whether it is reproducible by a human operator who knows nothing of the project or toolchain.
I believe this represents the only example of a reproducible paper based on scattering data collected at Diamond Light Source (UK) and the Institut Laue-Langevin (France).