This paper presents a fine example of a high-throughput computational materials screening study, focusing on carbon nanoclusters of different sizes. In the paper, a diverse set of empirical and machine-learned interatomic potentials commonly used to simulate carbonaceous materials is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as comparison criteria. Trying to reproduce the data presented here (even if you only consider a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure prediction problem. Even though the focus here is on isolated/finite nanoclusters, AIRSS (and similar approaches such as USPEX, CALYPSO, GMIN, etc.) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.
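The core benchmarking loop is simple to sketch. The snippet below is a minimal illustration, not the authors' actual workflow: the file name clusters.xyz, the dft_energy key, and the Lennard-Jones stand-in calculator are all assumptions, and a real study would swap in the empirical or machine-learned carbon potentials being benchmarked.

```python
# Minimal sketch (assumptions, not the paper's workflow): compare a cheap
# potential's energies against stored DFT reference energies for a set of
# clusters. Assumes ASE is installed and "clusters.xyz" (hypothetical file)
# holds geometries with DFT energies in atoms.info["dft_energy"].
import numpy as np
from ase.io import read
from ase.calculators.lj import LennardJones  # stand-in for a carbon potential

calc = LennardJones()  # replace with the potential under test

errors = []
for atoms in read("clusters.xyz", index=":"):
    e_dft = atoms.info["dft_energy"]             # DFT reference energy
    atoms.calc = calc
    e_pot = atoms.get_potential_energy()         # cheap-potential energy
    errors.append((e_pot - e_dft) / len(atoms))  # per-atom error

print(f"MAE per atom: {np.mean(np.abs(errors)):.3f} eV")
```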
The negative surface enthalpies in figure 5 are surprising. At least one group has attempted to reproduce these using a different code and obtained positive enthalpies. This was attributed to the inability of that code to independently relax the three simulation cell vectors, resulting in an unphysical water density. This demonstrates how sensitive these results can be to the particular implementation of simulation algorithms in different codes. Similarly, the force field used is now very popular. Its functional form and full set of parameters can be found in the literature. However, differences in how simulation codes implement truncation, electrostatics, etc. can lead to significant differences in results such as these. It would be a valuable exercise to establish whether exactly the same force field as that used here can be reproduced from only its specification in the literature. The interfacial energies of interest should be reproducible with simulations on modest numbers of processors (a few dozen), with run times of 1-2 days each. Each surface is an independent calculation, so these can be run concurrently during the ReproHack.
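For reference, the usual slab-based estimate of a surface or interfacial energy is gamma = (E_slab - N * e_bulk) / (2A); with this convention a stable surface has positive gamma, which is why negative values are surprising and why details such as cell relaxation matter. The function below is a minimal sketch of this textbook formula, not the paper's exact procedure.

```python
# Minimal sketch of the standard slab-based surface energy estimate:
#   gamma = (E_slab - N * e_bulk) / (2 * A)
# E_slab: relaxed slab energy; e_bulk: bulk energy per formula unit;
# N: formula units in the slab; A: surface area (the factor 2 accounts
# for the slab's two faces).

def surface_energy(e_slab, e_bulk_per_unit, n_units, area):
    """Return the surface energy per unit area for a two-faced slab."""
    return (e_slab - n_units * e_bulk_per_unit) / (2.0 * area)
```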
Because:
- Two fellow PhD students working on different topics have been able to reproduce some figures by following the README instructions, and I hope this extends to other people.
- I've tried to incorporate as many best practices as possible to make my code and data open and accessible.
- I've tried to make sure that my data is exactly reproducible with the specified random-seed strategy (a minimal sketch of this idea follows below).
- The paper suggests a method that should be useful to other researchers in my field, which is not useful unless my results are reproducible.
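The snippet below sketches one common way to pin down stochastic results; the paper's actual seed strategy may differ, and the seed value shown is hypothetical.

```python
# Minimal sketch of a fixed-seed strategy (the paper's may differ): seed
# every relevant RNG up front and record the seed alongside the outputs.
import random
import numpy as np

SEED = 42  # hypothetical value; store it with the published results
random.seed(SEED)
rng = np.random.default_rng(SEED)  # pass `rng` explicitly to sampling code
```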
The original data took quite a while to produce for a previous paper, but for this paper, all tables and figures should be exactly reproducible by simply running the Jupyter notebook.
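One way to check this end to end is to execute the notebook non-interactively; the sketch below uses nbformat and nbconvert, with the notebook filenames being hypothetical.

```python
# Minimal sketch (filenames hypothetical): run every cell of the analysis
# notebook in order and save the executed copy for comparison.
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read("analysis.ipynb", as_version=4)
ep = ExecutePreprocessor(timeout=3600)
ep.preprocess(nb, {"metadata": {"path": "."}})  # execute all cells in order
nbformat.write(nb, "analysis_executed.ipynb")
```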