This paper presents a fine example of a high-throughput computational materials screening study, focusing on carbon nanoclusters of different sizes. In the paper, a diverse set of empirical and machine-learned interatomic potentials commonly used to simulate carbonaceous materials is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as the comparison criteria. Trying to reproduce the data presented here (even if you only consider a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure prediction problem. Even though the focus here is on isolated/finite nanoclusters, AIRSS (and similar approaches such as USPEX, CALYPSO, GMIN, etc.) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.
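As a rough illustration of the kind of benchmark described above, the following is a minimal sketch of comparing an interatomic potential's cluster energies against DFT reference values. It assumes ASE is available, that cluster geometries and their DFT energies live in a hypothetical extended-XYZ file, and it uses ASE's built-in EMT calculator purely as a stand-in for the carbon potentials actually studied in the paper (e.g. Tersoff, EDIP, or GAP-type models).

```python
# Minimal sketch, not the paper's workflow. Assumptions: geometries are in
# "carbon_clusters.xyz" (extended XYZ) with the DFT reference energy stored
# in each frame's info["dft_energy"]; EMT is only a placeholder calculator.
import numpy as np
from ase.io import read
from ase.calculators.emt import EMT

frames = read("carbon_clusters.xyz", index=":")  # all cluster geometries

per_atom_errors = []
for atoms in frames:
    e_ref = atoms.info["dft_energy"]        # higher-level DFT reference
    atoms.calc = EMT()                      # swap in the potential under test
    e_pot = atoms.get_potential_energy()
    per_atom_errors.append((e_pot - e_ref) / len(atoms))

errors = np.array(per_atom_errors)
print(f"MAE  per atom: {np.abs(errors).mean():.3f} eV")
print(f"RMSE per atom: {np.sqrt((errors**2).mean()):.3f} eV")
```

The same loop structure extends naturally to other comparison criteria (e.g. forces or structural descriptors) and to any ASE-compatible potential, which is how a subset of the paper's benchmark could be rebuilt incrementally.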
There are many applications for multi-MeV X-rays. Their penetrative properties make them well suited to scanning dense objects in industry, and their ionising properties can destroy tumours in radiotherapy. They are also around the energy of nuclear transitions, so they can trigger nuclear reactions to break down nuclear waste into medical isotopes, or to reveal smuggled nuclear materials for port security. Laser-driven X-ray generation offers a compact and efficient way to create a bright source of X-rays without having to construct a large synchrotron. To fully utilise this capability, optimising the target design and understanding the underlying X-ray generation mechanisms are essential. The hybrid-PIC code is in a unique position to model the full interaction, so its ease of use and reproducibility are crucial for this field to develop.
This paper proposes a probabilistic planner that can solve goal-conditioned tasks such as complex continuous control problems. The approach reaches state-of-the-art performance when compared to current deep reinforcement learning algorithms. However, the method relies on an ensemble of deep generative models and is computationally intensive. It would be interesting to reproduce the results presented in this paper on their robotic manipulation and navigation problems, as these are very challenging tasks that current reinforcement learning methods cannot easily solve (and when they do, they require significantly more experience). Can the results be reproduced out of the box with the provided code?