-
Authors: Angela I. Renton, Thuy T. Dao, Tom Johnstone, Oren Civier, Ryan P. Sullivan, David J. White, Paris Lyons, Benjamin M. Slade, David F. Abbott, Toluwani J. Amos, Saskia Bollmann, Andy Botting, Megan E. J. Campbell, Jeryn Chang, Thomas G. Close, Monika Dörig, Korbinian Eckstein, Gary F. Egan, Stefanie Evas, Guillaume Flandin, Kelly G. Garner, Marta I. Garrido, Satrajit S. Ghosh, Martin Grignard, Yaroslav O. Halchenko, Anthony J. Hannan, Anibal S. Heinsfeld, Laurentius Huber, Matthew E. Hughes, Jakub R. Kaczmarzyk, Lars Kasper, Levin Kuhlmann, Kexin Lou, Yorguin-Jose Mantilla-Ramos, Jason B. Mattingley, Michael L. Meier, Jo Morris, Akshaiy Narayanan, Franco Pestilli, Aina Puce, Fernanda L. Ribeiro, Nigel C. Rogasch, Chris Rorden, Mark M. Schira, Thomas B. Shaw, Paul F. Sowman, Gershon Spitz, Ashley W. Stewart, Xincheng Ye, Judy D. Zhu, Aswin Narayanan & Steffen Bollmann
Mean reproducibility score: 2.5/10 | Number of reviews: 2
Why should we attempt to reproduce this paper?
We invested a lot of work in making the analyses from the paper reproducible, and we are very curious about how the documentation could be improved and whether people run into any problems.
-
Authors: Elio Campitelli, Leandro Díaz, Carolina Vera
Mean reproducibility score: 1.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
I used a lot of different tools and strategies to make this paper easily reproducible at different levels. There's a Docker container for the highest level of reproducibility, and package versions are managed with renv. The data used in the paper are hosted on Zenodo, both to avoid the long queue times of downloading from the Climate Data Store and to future-proof the analysis for when that service goes away, and the data are checksummed before use.
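As a rough sketch of the workflow described above (the file name and MD5 checksum are placeholders, not values from the paper's repository), the checksum verification and renv restore might look like this in R:

    # Verify a Zenodo download against a known checksum, then restore
    # the package versions pinned in renv.lock before running anything.
    library(tools)
    data_file <- "climate_data.nc"                   # placeholder file name
    expected  <- "d41d8cd98f00b204e9800998ecf8427e"  # placeholder MD5
    if (md5sum(data_file) != expected) {
      stop("Checksum mismatch: re-download the data from Zenodo.")
    }
    renv::restore()  # reinstall the exact versions recorded in renv.lock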
-
Authors: Romain Caneill, Fabien Roquet, Gurvan Madec, Jonas Nycander
Mean reproducibility score: 0.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
I tried hard to make it reproducible, so hopefully this paper can serve as an example of how reproducibility can be achieved. I think that being reproducible with only a few commands typed in a terminal is quite an achievement, at least in my field, where I usually see code published along with the paper but with almost no documentation on how to rerun it.
-
Authors: Robert A. Smith, Paul P. Schneider, Alice Bullas, Steve Haake, Helen Quirk, Rami Cosulich, Elizabeth Goyder
Mean reproducibility score: 9.2/10 | Number of reviews: 5
Why should we attempt to reproduce this paper?
The code and data are both on GitHub. The paper has been published in Wellcome Open Research and has been replicated by multiple other authors.
-
Authors: Khaliq, I., Fanning, J., Melloy, P. et al.
Why should we attempt to reproduce this paper?
I suggested a few papers last year, and I'm hoping we've improved our reproducibility with this one. We've done our best to package it up both in Docker and as an R package. I'd be curious to know which turns out to be the best way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?
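For context, the R-package route the authors mention usually amounts to installing the package and working through its vignettes; a minimal sketch, assuming a placeholder package name rather than the paper's actual package:

    # Hypothetical package name; the real one ships with the paper.
    install.packages("paperpkg")                # or install from the Docker image's library
    browseVignettes("paperpkg")                 # list the available vignettes
    vignette("analysis", package = "paperpkg")  # step through one vignette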
-
Authors: Atsushi Ebihara, Joel H. Nitta, Yurika Matsumoto, Yuri Fukazawa, Marie Kurihara, Hitomi Yokote, Kaoru Sakuma, Otowa Azakami, Yumiko Hirayama, Ryoko Imaichi
Mean reproducibility score: 10.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
It uses the drake R package, which should make reproducibility of R projects much easier (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.
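For readers unfamiliar with drake, a make.R script typically declares a plan of targets and builds whatever is out of date; a minimal sketch with placeholder targets, not the paper's actual plan:

    library(drake)
    plan <- drake_plan(
      raw   = read.csv(file_in("data/measurements.csv")),  # placeholder input
      model = lm(response ~ treatment, data = raw),
      summ  = summary(model)
    )
    make(plan)  # builds raw, model, summ in dependency order; reruns only stale targets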
-
Authors: Kamvar ZN, Amaradasa BS, Jhala R, McCoy S, Steadman JR, Everhart SE
Mean reproducibility score: 6.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work via local installs. It would be interesting to see whether it's reproducible by a human operator who knows nothing of the project or toolchain.