Papers



Browse ReproHack papers

  • Machine learning a model for RNA structure prediction

    Authors: Nicola Calonaci, Alisha Jones, Francesca Cuturello, Michael Sattler, Giovanni Bussi
    DOI: 10.1093/nargab/lqaa090
    Submitted by giovannibussi      

    Why should we attempt to reproduce this paper?

    The method is trained on the data that were available at the time, but it is designed to be re-trained as soon as new data are published. It would be great to be sure that someone else will actually be able to do this. If we receive any feedback, we would be happy to improve our GitHub repository to make reproduction easier!

  • Automatic learning of hydrogen-bond fixes in an AMBER RNA force field

    Authors: Thorben Fröhlking, Vojtěch Mlýnský, Michal Janeček, Petra Kührová, Miroslav Krepl, Pavel Banáš, Jiří Šponer, Giovanni Bussi
    Submitted by giovannibussi      

    Why should we attempt to reproduce this paper?

    We care about reproducibility. If we receive any feedback, we would be happy to improve our GitHub repository and/or the submitted manuscript to make reproduction easier!

  • Synergistic coupling in ab initio-machine learning simulations of dislocations

    Authors: Petr Grigorev, Alexandra M. Goryaeva, Mihai-Cosmin Marinica, James R. Kermode, Thomas D. Swinburne
    arXiv: https://arxiv.org/abs/2111.11262
    Submitted by jameskermode      

    Why should we attempt to reproduce this paper?

    Systematically improvable machine learning potentials could have a significant impact on the range of properties that can be modelled, but the toolchain associated with using them presents a barrier to entry for new users. Attempting to reproduce some of our results will help us improve the accessibility of the approach.

  • Sensitivity and dimensionality of atomic environment representations used for machine learning interatomic potentials

    Authors: Berk Onat, Christoph Ortner and James Kermode
    DOI: 10.1063/5.0016005
    Submitted by jameskermode      

    Why should we attempt to reproduce this paper?

    Popular descriptors for machine learning potentials, such as the Behler–Parrinello atom-centred symmetry functions (ACSF) or the Smooth Overlap of Atomic Positions (SOAP), are widely used, but so far little attention has been paid to how many descriptor components need to be included to give good results.
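
    The dimensionality question can be made concrete with a toy sketch. The snippet below is illustrative only (it is not the paper's code, and the `etas`/`shifts` parameter grids are made-up values): it computes Behler–Parrinello radial symmetry functions for a small cluster, where the per-atom descriptor length is the product of the grid sizes — exactly the knob whose size the paper investigates.

```python
import numpy as np

def cutoff(r, rc):
    """Behler-Parrinello cosine cutoff: decays smoothly to zero at r = rc."""
    return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def radial_acsf(positions, rc=6.0, etas=(0.5, 1.0, 2.0), shifts=(0.0, 1.0, 2.0)):
    """Radial symmetry functions G2_i = sum_j exp(-eta (r_ij - rs)^2) fc(r_ij).

    Descriptor length per atom = len(etas) * len(shifts); adding grid points
    grows the feature vector, with unclear returns in accuracy.
    """
    n = len(positions)
    feats = np.zeros((n, len(etas) * len(shifts)))
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)  # pairwise distances
        d = d[np.arange(n) != i]                              # exclude self-distance
        fc = cutoff(d, rc)
        k = 0
        for eta in etas:
            for rs in shifts:
                feats[i, k] = np.sum(np.exp(-eta * (d - rs) ** 2) * fc)
                k += 1
    return feats

# toy 4-atom cluster
pos = np.array([[0.0, 0, 0], [1.5, 0, 0], [0, 1.5, 0], [0, 0, 1.5]])
G = radial_acsf(pos)
print(G.shape)  # (4, 9): 4 atoms, 3 etas x 3 shifts = 9 components each
```

    Reproducing the paper's sensitivity analysis amounts to asking how prediction accuracy changes as this component count is varied.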

  • PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals

    Authors: Henry Charlesworth and Giovanni Montana
    Submitted by gmontana74      
      Mean reproducibility score: 10.0/10 | Number of reviews: 1

    Why should we attempt to reproduce this paper?

    This paper proposes a probabilistic planner that can solve goal-conditioned tasks such as complex continuous control problems. The approach reaches state-of-the-art performance compared to current deep reinforcement learning algorithms. However, the method relies on an ensemble of deep generative models and is computationally intensive. It would be interesting to reproduce the results presented in this paper on the robotic manipulation and navigation problems, as these are very challenging tasks that current reinforcement learning methods cannot easily solve (and when they do, they require significantly more experience). Can the results be reproduced out of the box with the provided code?

  • Optimizing the Use of Carbonate Standards to Minimize Uncertainties in Clumped Isotope Data

    Authors: Ilja J. Kocken, Inigo A. Müller, Martin Ziegler
    DOI: 10.1029/2019GC008545
    Submitted by japhir      

    Why should we attempt to reproduce this paper?

    Even though the paper focuses on a specific measurement (clumped isotopes) and on optimising which and how many standards we use, I hope the problem is general enough that the insights translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) that explains what is going on and builds up the simulations one step at a time. I really hope it is understandable to future collaborators and scientists in my field, but the code has never been reviewed internally, and I did not receive any feedback on it from the reviewers either. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!