This paper is fully reproducible: we provide the protocol that the different modelers followed, the data produced by their models, the observed data, and the code to run the analysis that led to the results, figures, and text of the paper. I have not come across any other paper in forestry that is as fully reproducible as ours, so it may be a rare example in this field and hopefully a motivation for others to do the same. Please note that we do not provide the models used to run the simulations, as those constitute the data-collection step; we do, however, provide the data resulting from these simulations.
This paper proposes a probabilistic planner that can solve goal-conditioned tasks such as complex continuous control problems. The approach reaches state-of-the-art performance when compared to current deep reinforcement learning algorithms; however, it relies on an ensemble of deep generative models and is computationally intensive. It would be interesting to reproduce the results reported on the robotic manipulation and navigation problems, as these are very challenging problems that current reinforcement learning methods cannot easily solve (and when they do, they require significantly more experience). Can the results be reproduced out of the box with the provided code?
The current code is written in Torch, which is no longer actively maintained. Since deep learning in nanophotonics is an area of active interest (e.g., for the design of new metamaterials), it is important to update the code to use a more modern deep learning library such as TensorFlow/Keras.
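
As an illustration only, a port to tf.keras of a simple forward model (mapping design parameters to an optical response) might look like the sketch below. The architecture, layer sizes, and input/output dimensions are assumptions made for this example, not the authors' original network; a real port would have to match the Torch code layer by layer.

    # Minimal sketch of a ported model in tf.keras (illustrative assumptions only).
    import numpy as np
    import tensorflow as tf

    def build_model(n_inputs: int, n_outputs: int) -> tf.keras.Model:
        """Fully connected regressor, e.g. design parameters -> spectral response."""
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_inputs,)),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(n_outputs),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    if __name__ == "__main__":
        # Dummy data, only to check that the ported model trains end to end.
        x = np.random.rand(128, 8).astype("float32")
        y = np.random.rand(128, 200).astype("float32")
        model = build_model(n_inputs=8, n_outputs=200)
        model.fit(x, y, epochs=2, batch_size=32, verbose=0)

Besides being maintained, such a port would make the trained weights and training scripts usable with current Python tooling, which is what matters most for long-term reproducibility.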