Papers

Browse ReproHack papers

  • Neurodesk: an accessible, flexible and portable data analysis environment for reproducible neuroimaging

    Authors: Angela I. Renton, Thuy T. Dao, Tom Johnstone, Oren Civier, Ryan P. Sullivan, David J. White, Paris Lyons, Benjamin M. Slade, David F. Abbott, Toluwani J. Amos, Saskia Bollmann, Andy Botting, Megan E. J. Campbell, Jeryn Chang, Thomas G. Close, Monika Dörig, Korbinian Eckstein, Gary F. Egan, Stefanie Evas, Guillaume Flandin, Kelly G. Garner, Marta I. Garrido, Satrajit S. Ghosh, Martin Grignard, Yaroslav O. Halchenko, Anthony J. Hannan, Anibal S. Heinsfeld, Laurentius Huber, Matthew E. Hughes, Jakub R. Kaczmarzyk, Lars Kasper, Levin Kuhlmann, Kexin Lou, Yorguin-Jose Mantilla-Ramos, Jason B. Mattingley, Michael L. Meier, Jo Morris, Akshaiy Narayanan, Franco Pestilli, Aina Puce, Fernanda L. Ribeiro, Nigel C. Rogasch, Chris Rorden, Mark M. Schira, Thomas B. Shaw, Paul F. Sowman, Gershon Spitz, Ashley W. Stewart, Xincheng Ye, Judy D. Zhu, Aswin Narayanan & Steffen Bollmann
    DOI: https://doi.org/10.1038/s41592-023-02145-x
    Submitted by sbollmann    
      Mean reproducibility score:   2.5/10   |   Number of reviews:   2
    Why should we attempt to reproduce this paper?

    We invested a lot of work in making the analyses from the paper reproducible, and we are very curious whether people run into any problems and how the documentation could be improved.

  • Accelerating the prediction of large carbon clusters via structure search: Evaluation of machine-learning and classical potentials

    Authors: Bora Karasulu, Jean-Marc Leyssale, Patrick Rowe, Cedric Weber, Carla de Tomas
    DOI: https://doi.org/10.1016/j.carbon.2022.01.031
    Submitted by bkarasulu    
    Number of reviews:   1
    Why should we attempt to reproduce this paper?

    This paper presents a fine example of a high-throughput computational materials screening study, focusing mainly on carbon nanoclusters of different sizes. In the paper, a set of diverse empirical and machine-learned interatomic potentials, which are commonly used to simulate carbonaceous materials, is benchmarked against higher-level density functional theory (DFT) data, using a range of structural features as the comparison criteria. Trying to reproduce the data presented here (even if you only consider a subset of the interaction potentials) will help you develop an understanding of how to approach a high-throughput structure prediction problem. Even though we concentrate here on isolated/finite nanoclusters, AIRSS (and similar approaches such as USPEX, CALYPSO, and GMIN) can also be used to predict crystal structures of different classes of materials, with applications in energy storage, catalysis, hydrogen storage, and so on.

  • New Insight into the Stability of CaCO3 Surfaces and Nanoparticles via Molecular Simulation

    Authors: A. Matthew Bano, P. Mark Rodger, and David Quigley
    DOI: https://doi.org/10.1021/la501409j
    Submitted by dquigley      

    Why should we attempt to reproduce this paper?

    The negative surface enthalpies in Figure 5 are surprising. At least one group has attempted to reproduce them using a different code and obtained positive enthalpies. This was attributed to that code's inability to independently relax the three simulation cell vectors, resulting in an unphysical water density, and it demonstrates how sensitive these results can be to the particular implementation of simulation algorithms in different codes. Similarly, the force field used is now very popular. Its functional form and full set of parameters can be found in the literature. However, differences in how simulation codes implement truncation, electrostatics, etc. can lead to significant differences in results such as these. It would be a valuable exercise to establish whether exactly the same force field as that used here can be reproduced from only its specification in the literature. The interfacial energies of interest should be reproducible with simulations on modest numbers of processors (a few dozen), with run times of 1-2 days each. Each surface is an independent calculation, so these can be run concurrently during the ReproHack.

  • REMoDNaV: robust eye-movement classification for dynamic stimulation

    Authors: Asim H. Dar, Adina S. Wagner, Michael Hanke
    DOI: https://doi.org/10.3758/s13428-020-01428-x
    Submitted by adswa    
      Mean reproducibility score:   7.6/10   |   Number of reviews:   5
    Why should we attempt to reproduce this paper?

    In theory, reproducing this paper should only require cloning a public Git repository and executing a Makefile (detailed in the README of the paper repository at https://github.com/psychoinformatics-de/paper-remodnav). We've set up our paper to be dynamically generated, retrieving and installing the relevant data and software automatically, and we've even created a tutorial about it so that others can reuse the same setup for their work. Nevertheless, we have, for example, never tried it out across different operating systems - who knows whether it works on Windows? We'd love to share the tips and tricks we found to work, and we'd love feedback on how to improve this further even more.

  • Optimizing the Use of Carbonate Standards to Minimize Uncertainties in Clumped Isotope Data

    Authors: Ilja J. Kocken, Inigo A. Müller, Martin Ziegler
    DOI: https://doi.org/10.1029/2019GC008545
    Submitted by japhir      

    Why should we attempt to reproduce this paper?

    Even though the approach in the paper focuses on a specific measurement (clumped isotopes) and on optimizing which and how many standards we use, I hope the problem is general enough that the insight can translate to any kind of measurement that relies on machine calibration. I committed to writing a literate program (plain text interspersed with code chunks) to explain what is going on and to build up the simulations one step at a time. I really hope that this is understandable to future collaborators and scientists in my field, but I have not had any internal code review, and I also didn't receive any feedback on it from the reviewers. I would love to see whether what in my mind represents "reproducible code" is actually reproducible, and to learn what I can improve for future projects!

Filter by tags

Python R GDAL GEOS GIS Shiny PROJ Galaxies Astronomy HPC Databases Binder Social Science Stata make Computer Science Jupyter Notebook tidyverse emacs literate earth sciences clumped isotopes org-mode geology eyetracking LaTeX Git ArcGIS Docker Drake SVN knitr C Matlab Mathematica Meta-analysis swig miniconda tensorflow keras Pandas SQL neuroscience robotics deep learning planner reinforcement learning Plasma physics Hybrid-PIC EPOCH Laser Gamma-ray X-ray radiation Petawatt Fortran plasma PIC physics Monte Carlo Atomistic Simulation LAMMPS Electron Transport DFT descriptors interatomic potentials machine learning Molecular Dynamics Python scripting AIRSS structure prediction density functional theory high-throughput machine-learning RNA bioinformatics CFD Fluid Dynamics OpenFOAM C++ DNS Mathematics Droplets Basilisk Particle-In-Cell psychology Stan Finance SAS Replication crisis Economics Malaria consumer behavior number estimation mental arithmetic psychophysics Archaeology Precipitation Epidemiology Parkrun Health Health Economics HTA plumber science of science Zipf networks city size distribution urbanism literature review Preference Visual Questionnaire Mann-Whitney Correlation Conceptual replication Cognitive psychology Multinomial processing tree (MPT) modeling #urbanism #R k-means cluster analysis city-regions Urban Knowledge Systems Topic modelling Planning Support Systems Software Citation Quarto snakemake Numerical modelling Ocean climate physical oceanography apptainer oceanography R package structural equation modeling bayes factor Forest Simulations Models of forest dynamics multi-lab study mice mechanics growth Tissue Cells Clustering Expectation-Maximization bootstrapping R software Position Weight Matrices singularity neuroimaging effect size biology replicability cancer reproducibility csv osf preclinical research genomics
