Papers

Browse ReproHack papers

  • Accelerating the prediction of large carbon clusters via structure search: Evaluation of machine-learning and classical potentials

    Authors: Bora Karasulu, Jean-Marc Leyssale, Patrick Rowe, Cedric Weber, Carla de Tomas
    DOI: 10.1016/j.carbon.2022.01.031
    Submitted by bkarasulu
    Number of reviews: 1
    Why should we attempt to reproduce this paper?

    This paper presents a fine example of a high-throughput computational materials screening study, focusing on carbon nanoclusters of different sizes. A diverse set of empirical and machine-learned interatomic potentials commonly used to simulate carbonaceous materials is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as the comparison criteria. Trying to reproduce the data presented here (even for only a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure-prediction problem; a minimal benchmarking sketch follows below. Although the paper concentrates on isolated/finite nanoclusters, AIRSS (and other similar approaches, such as USPEX, CALYPSO, GMIN, etc.) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.
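
    As a rough illustration of this kind of benchmark (not the paper's actual workflow), here is a minimal Python sketch that compares one potential's energies against stored DFT reference energies for a set of cluster geometries. It assumes ASE and the quippy GAP interface are installed; the file names and the "dft_energy" field are hypothetical placeholders.

        # Compare per-atom energies from one interatomic potential against DFT
        # reference values for a set of carbon-cluster geometries.
        import numpy as np
        from ase.io import read
        from quippy.potential import Potential  # or any other ASE calculator

        frames = read("clusters.xyz", index=":")  # hypothetical geometry file
        e_dft = np.array([f.info["dft_energy"] for f in frames])  # assumed field

        calc = Potential(param_filename="carbon_gap.xml")  # hypothetical GAP model
        e_pot = []
        for atoms in frames:
            atoms.calc = calc
            e_pot.append(atoms.get_potential_energy())

        n_atoms = np.array([len(f) for f in frames])
        per_atom_err = (np.array(e_pot) - e_dft) / n_atoms
        print(f"MAE: {np.abs(per_atom_err).mean():.4f} eV/atom")

    The same loop generalises to forces or structural fingerprints (e.g. radial distribution functions), which is closer to the structural comparison criteria the paper uses.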

  • Where should new parkrun events be located? Modelling the potential impact of 200 new events on socio-economic inequalities in access and participation

    Authors: Schneider PP, Smith RA, Bullas AM, Bayley T, Haake SS, Brennan A, Goyder E
    Submitted by hub-admin
    Mean reproducibility score: 7.0/10 | Number of reviews: 3
    Why should we attempt to reproduce this paper?

    If all went right, the analysis should be fully reproducible without the need for any adjustments. The paper aims to find optimal locations for new parkrun events, but we were not 100% sure how 'optimal' should be defined. We provide a few examples, but the code is meant to be flexible enough to let potential decision makers specify their own, alternative objectives; a toy sketch of such a pluggable objective follows below. The spatial data set is also quite interesting and fun to play around with. Caveat: the full analysis takes a while to run (30+ minutes) and may require 8 GB of RAM or more.
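
    To make the "flexible objective" idea concrete, here is a toy Python sketch (not the paper's code) of greedy site selection with a user-supplied objective; all names and the random data are illustrative.

        # Greedy selection of k new event locations under a pluggable objective.
        import numpy as np

        def greedy_select(dist, pop, k, objective):
            """dist: candidate-by-area distance matrix; pop: population per area."""
            nearest = np.full(dist.shape[1], np.inf)  # distance to closest chosen site
            chosen = []
            for _ in range(k):
                scores = np.array([objective(np.minimum(nearest, dist[c]), pop)
                                   for c in range(dist.shape[0])], dtype=float)
                scores[chosen] = -np.inf              # never re-pick a site
                best = int(np.argmax(scores))
                chosen.append(best)
                nearest = np.minimum(nearest, dist[best])
            return chosen

        # One possible objective: population living within 5 km of an event.
        coverage = lambda d, pop: pop[d <= 5.0].sum()

        rng = np.random.default_rng(0)
        dist = rng.uniform(0, 20, size=(50, 200))  # 50 candidate sites, 200 areas
        pop = rng.integers(100, 5000, size=200)
        print(greedy_select(dist, pop, k=5, objective=coverage))

    Swapping coverage for, say, a deprivation-weighted distance sum changes the notion of "optimal" without touching the selection loop.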

  • Open Trade Statistics

    Authors: Pachá (Mauricio Vargas Sepúlveda)
    Submitted by hub-admin

    Why should we attempt to reproduce this paper?

    The focus of the project is reproducibility. This blog post shows how data access differs from similar initiatives: https://ropensci.org/blog/2019/05/09/tradestatistics/. Also, similar projects keep parts of their pipeline obscure, while ours exposes the code all the way from raw-data download to dashboard creation. A hypothetical data-access sketch follows after this entry's tags.

    Tags: R Shiny
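
    For a taste of programmatic access, here is a hypothetical Python sketch; the route and parameter names below are assumptions, so check https://api.tradestatistics.io (or the tradestatistics R package) for the real interface before relying on them.

        # Pull one reporter-partner-year slice from the Open Trade Statistics API.
        import pandas as pd
        import requests

        resp = requests.get(
            "https://api.tradestatistics.io/yrpc",       # assumed route
            params={"y": 2018, "r": "chl", "p": "arg"},  # assumed parameter names
            timeout=30,
        )
        resp.raise_for_status()
        df = pd.DataFrame(resp.json())
        print(df.head())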