This paper proposes a probabilistic planner that can solve goal-conditioned tasks such as complex continuous control problems. The approach reaches state-of-the-art performance compared to current deep reinforcement learning algorithms. However, the method relies on an ensemble of deep generative models and is computationally intensive. It would be interesting to reproduce the results on the paper's robotic manipulation and navigation tasks, as these are very challenging problems that current reinforcement learning methods cannot easily solve (and when they do, they require a significantly larger number of experiences). Can the results be reproduced out of the box with the provided code?
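The comment above refers to a probabilistic planner built on an ensemble of deep generative models for goal-conditioned control. As a rough illustration of that general idea only (not the authors' actual algorithm, which the comment does not spell out), the minimal sketch below assumes a random-shooting, MPC-style planner that scores candidate action sequences by their average final distance to the goal under a small ensemble of toy stochastic dynamics models; all class and function names here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyStochasticModel:
    """Stand-in for one learned generative dynamics model: sample a next state
    given (state, action). A real planner would use a trained deep model;
    a noisy linear map keeps this sketch self-contained and runnable."""
    def __init__(self, state_dim, action_dim, seed):
        r = np.random.default_rng(seed)
        self.A = np.eye(state_dim) + 0.05 * r.normal(size=(state_dim, state_dim))
        self.B = 0.1 * r.normal(size=(state_dim, action_dim))

    def sample_next(self, state, action):
        return self.A @ state + self.B @ action + 0.01 * rng.normal(size=state.shape)

def plan_to_goal(ensemble, state, goal, horizon=15, n_candidates=256, action_dim=2):
    """Random-shooting planner: score candidate action sequences by the
    expected final distance to the goal, averaged over the model ensemble."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    scores = np.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        final_dists = []
        for model in ensemble:                 # average over ensemble members
            s = state.copy()
            for a in actions:                  # roll the action sequence forward
                s = model.sample_next(s, a)
            final_dists.append(np.linalg.norm(s - goal))
        scores[i] = np.mean(final_dists)
    best = candidates[np.argmin(scores)]
    return best[0]                             # execute only the first action (MPC-style)

if __name__ == "__main__":
    state_dim, action_dim = 4, 2
    ensemble = [ToyStochasticModel(state_dim, action_dim, seed=k) for k in range(5)]
    action = plan_to_goal(ensemble, np.zeros(state_dim), np.ones(state_dim),
                          action_dim=action_dim)
    print("first planned action:", action)
```

The sketch also hints at why the approach is computationally intensive: every candidate sequence must be rolled out through every ensemble member at every planning step.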
The paper describes pyKNEEr, a Python package for open and reproducible research on femoral knee cartilage that uses Jupyter notebooks as a user interface. I created this paper with the specific intent to make both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field. Therefore, two things in the paper can be reproduced:

1) Workflow results: Table 2 contains links to all the Jupyter notebooks used to calculate the results. Computations are long and might require a server, so if you want to run them locally, I recommend using only 2 or 3 images as inputs. The paper should be sufficient, but if you need further introductory information, there is a documentation website: https://sbonaretti.github.io/pyKNEEr/ and a "how to" video: https://youtu.be/7WPf5KFtYi8

2) Paper graphs: The captions of figures 1, 4, and 5 contain links to the data repository, the code (a Jupyter notebook), and the computational environment (Binder) needed to fully reproduce each graph. These computations can easily be run locally and take only a few seconds.

All Jupyter notebooks automatically download data from Zenodo and declare their dependencies, which should make reproduction easier.
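Since the notebooks fetch their inputs from Zenodo and can be executed end to end, a generic sketch of that pattern is shown below. It uses the public Zenodo records API and the standard `jupyter nbconvert --execute` command; the record ID and notebook filename are placeholders, not the actual pyKNEEr artifacts (those links are given in the figure captions and Table 2), and the exact shape of the Zenodo JSON response may differ slightly from what is assumed here.

```python
import json
import subprocess
import urllib.request
from pathlib import Path

def download_zenodo_record(record_id, dest="data"):
    """List the files of a Zenodo record via the public API and download each one."""
    dest = Path(dest)
    dest.mkdir(exist_ok=True)
    api_url = f"https://zenodo.org/api/records/{record_id}"
    with urllib.request.urlopen(api_url) as resp:
        record = json.load(resp)
    for f in record["files"]:
        # Standard Zenodo web download URL for a file in a record
        url = f"https://zenodo.org/records/{record_id}/files/{f['key']}?download=1"
        print("downloading", f["key"])
        urllib.request.urlretrieve(url, dest / f["key"])
    return dest

def execute_notebook(path):
    """Run a notebook headlessly and save the executed copy alongside it."""
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute",
         "--output", Path(path).stem + "_executed.ipynb", str(path)],
        check=True,
    )

if __name__ == "__main__":
    download_zenodo_record("1234567")               # placeholder record ID
    execute_notebook("figure_reproduction.ipynb")   # placeholder notebook name
```

For the paper graphs, the Binder links in the figure captions avoid even this much setup, since the computational environment is built for you in the browser.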