Papers
Browse ReproHack papers

  • Accelerating the prediction of large carbon clusters via structure search: Evaluation of machine-learning and classical potentials

    Authors: Bora Karasulu, Jean-Marc Leyssale, Patrick Rowe, Cedric Weber, Carla de Tomas
    DOI: 10.1016/j.carbon.2022.01.031
    Submitted by bkarasulu
      Number of reviews:   1
    Why should we attempt to reproduce this paper?

    This paper presents a fine example of a high-throughput computational materials screening study, focusing on carbon nanoclusters of different sizes. In the paper, a set of diverse empirical and machine-learned interatomic potentials commonly used to simulate carbonaceous materials is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as the comparison criteria. Trying to reproduce the data presented here (even if you only consider a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure prediction problem (a minimal, illustrative sketch of such a benchmarking loop is given after this list). Even though we concentrate here on isolated/finite nanoclusters, AIRSS (and other similar approaches such as USPEX, CALYPSO, GMIN, etc.) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.

  • Optimizing the Use of Carbonate Standards to Minimize Uncertainties in Clumped Isotope Data

    Authors: Ilja J. Kocken, Inigo A. Müller, Martin Ziegler
    DOI: 10.1029/2019GC008545
    Submitted by japhir      

    Why should we attempt to reproduce this paper?

    Even though the approach in the paper focuses on a specific type of measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope that the problem is general enough that the insights can translate to any kind of measurement that relies on machine calibration. I've committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but I have not had any internal code review, and I did not receive any feedback on it from the reviewers either. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!

  • The viewing angle in AGN SED models, a data-driven analysis

    Authors: Andrés Felipe Ramos Padilla, Lingyu Wang, Katarzyna Małek, Andreas Efstathiou, Guang Yang
    Submitted by aframosp    
      Mean reproducibility score:   9.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    Most of the material is available through Jupyter notebooks on GitHub, and it should be easy to reproduce with the help of Binder. With the notebooks, you can experiment with parameters different from the ones analysed in the paper. The repository also contains a large dataset of physical parameters for the galaxies analysed in this work. We expect this work to be easily reproducible by following the steps described in the repository.

  • Dynamic redistribution of plasticity in a cerebellar spiking neural network reproducing an associative learning task perturbed by TMS

    Authors: Alberto Antonietti, Jessica Monaco, Egidio D'Angelo, Alessandra Pedrocchi, and Claudia Casellato
    Submitted by @_Aalph    

    Why should we attempt to reproduce this paper?

    The paper and its code+data were published four years ago; will they still work? I always try to release the data and code needed to reproduce my papers, but I seldom receive feedback. It would be useful to have comments from a team of reproducers, in order to improve how I share material for future research (I have already switched from MATLAB to Python).

  • Hyperparameter Importance Across Datasets

    Authors: Jan N. van Rijn and Frank Hutter
    DOI: 10.1145/3219819.3220058
    Submitted by hub-admin    
      Mean reproducibility score:   7.0/10   |   Number of reviews:   1
    Why should we attempt to reproduce this paper?

    I tried hard to make this paper as reproducible as possible, but as techniques and dependencies become more complex, it is hard to make it 100% clear. Any form of feedback is more than welcome.
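
As mentioned in the carbon-clusters entry above, that paper benchmarks cheap interatomic potentials against DFT reference data for sets of candidate structures. Purely as an illustration (this is not code from any of the submitted papers), the minimal Python sketch below shows what such a comparison loop can look like. It assumes ASE and NumPy are installed; the Lennard-Jones calculator, the random clusters, and the noise-perturbed "reference" energies are placeholders standing in for the real empirical/ML potentials, AIRSS-generated structures, and DFT single-point energies.

    import numpy as np
    from ase import Atoms
    from ase.calculators.lj import LennardJones  # placeholder for an empirical/ML potential

    rng = np.random.default_rng(0)

    def random_carbon_cluster(n_atoms, box=6.0):
        """Random (unrelaxed) carbon cluster in a cubic region -- stands in for structure-search output."""
        positions = rng.uniform(0.0, box, size=(n_atoms, 3))
        return Atoms(f"C{n_atoms}", positions=positions)

    # Candidate structures (in the paper these come from a structure search such as AIRSS).
    candidates = [random_carbon_cluster(10) for _ in range(20)]

    # Evaluate every candidate with the cheap potential.
    cheap = []
    for atoms in candidates:
        atoms.calc = LennardJones(sigma=1.4, epsilon=1.0)  # illustrative parameters only
        cheap.append(atoms.get_potential_energy())
    cheap = np.array(cheap)

    # Reference energies would be DFT single points on the same structures;
    # faked here with added noise so the sketch runs standalone.
    reference = cheap + rng.normal(0.0, 0.5, size=len(candidates))

    # One simple comparison criterion: rank correlation of the two energy orderings.
    rank = lambda x: np.argsort(np.argsort(x))
    rho = np.corrcoef(rank(cheap), rank(reference))[0, 1]
    print(f"Rank correlation between cheap potential and reference energies: {rho:.2f}")

In a real reproduction attempt the comparison criteria would follow the paper (structural features rather than just energy ordering), and the placeholder calculator would be swapped for each potential being benchmarked.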
