-
Authors: Matúš Medo, Michaela Medová
Submitted by 8medom
Why should we attempt to reproduce this paper?
I hope that the evaluation framework introduced in the paper can be adopted by other researchers working on mutational signatures.
-
Authors: Angela I. Renton, Thuy T. Dao, Tom Johnstone, Oren Civier, Ryan P. Sullivan, David J. White, Paris Lyons, Benjamin M. Slade, David F. Abbott, Toluwani J. Amos, Saskia Bollmann, Andy Botting, Megan E. J. Campbell, Jeryn Chang, Thomas G. Close, Monika Dörig, Korbinian Eckstein, Gary F. Egan, Stefanie Evas, Guillaume Flandin, Kelly G. Garner, Marta I. Garrido, Satrajit S. Ghosh, Martin Grignard, Yaroslav O. Halchenko, Anthony J. Hannan, Anibal S. Heinsfeld, Laurentius Huber, Matthew E. Hughes, Jakub R. Kaczmarzyk, Lars Kasper, Levin Kuhlmann, Kexin Lou, Yorguin-Jose Mantilla-Ramos, Jason B. Mattingley, Michael L. Meier, Jo Morris, Akshaiy Narayanan, Franco Pestilli, Aina Puce, Fernanda L. Ribeiro, Nigel C. Rogasch, Chris Rorden, Mark M. Schira, Thomas B. Shaw, Paul F. Sowman, Gershon Spitz, Ashley W. Stewart, Xincheng Ye, Judy D. Zhu, Aswin Narayanan & Steffen Bollmann
Mean reproducibility score: 2.5/10 | Number of reviews: 2
Why should we attempt to reproduce this paper?
We invested a lot of work in making the analyses from the paper reproducible, and we are very curious to learn how the documentation could be improved and whether people run into any problems.
-
Authors: Elio Campitelli, Leandro Díaz, Carolina Vera
Mean reproducibility score: 1.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
I used a lot of different tools and strategies to make this paper easily reproducible at different levels. There is a Docker container for the highest level of reproducibility, and package versions are managed with renv. The data used in the paper are hosted on Zenodo, both to avoid the long queue times of downloading from the Climate Data Store and to future-proof the analysis for when that store goes away; the data are checksummed before use.
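As a rough illustration of how such a setup fits together, here is a minimal R sketch; the file name, Zenodo URL, and checksum are placeholders, not values from the paper.

```r
# Restore the exact package versions recorded in renv.lock.
renv::restore()

data_file <- "data/era5_monthly.nc"               # assumed file name
expected  <- "9e107d9d372bb6826bd81d3542a419d6"   # assumed MD5 checksum

if (!file.exists(data_file)) {
  # Assumed Zenodo record URL; the real one is in the paper's repository.
  download.file("https://zenodo.org/record/xxxxxxx/files/era5_monthly.nc",
                destfile = data_file, mode = "wb")
}

# Verify the download against the expected checksum before using it.
stopifnot(unname(tools::md5sum(data_file)) == expected)
```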
-
Authors: Nicola Calonaci, Alisha Jones, Francesca Cuturello, Michael Sattler, Giovanni Bussi
Why should we attempt to reproduce this paper?
The method is trained on the data that were available, but it is meant to be re-trainable as soon as new data are published. It would be great to be sure that someone else will be able to do this as well. If we receive any feedback, we would be happy to improve our GitHub repository so as to make reproduction easier!
-
Authors: Khaliq, I., Fanning, J., Melloy, P. et al.
Why should we attempt to reproduce this paper?
I suggested a few papers last year, and I'm hoping that we've improved our reproducibility with this one this year. We've done our best to package it up both in Docker and as an R package. I'd be curious to know which route turns out to be the best way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?
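For concreteness, here is a hedged sketch of the two routes; the repository and image names are assumptions, not the actual ones.

```r
# Route 1: install the R package and work through its vignettes.
install.packages("remotes")
remotes::install_github("user/paper-package", build_vignettes = TRUE)  # assumed repo
browseVignettes("paper-package")  # lists the installed vignettes

# Route 2: spin up the accompanying Docker image instead (run in a shell):
#   docker run --rm -p 8787:8787 user/paper-image   # assumed image name
```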
-
Authors: Atsushi Ebihara, Joel H. Nitta, Yurika Matsumoto, Yuri Fukazawa, Marie Kurihara, Hitomi Yokote, Kaoru Sakuma, Otowa Azakami, Yumiko Hirayama, Ryoko Imaichi
Mean reproducibility score: 10.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
It uses the drake R package, which should make R projects much easier to reproduce (just run make.R and you're done). However, it does depend on very specific package versions, which are provided by the accompanying Docker image.
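For readers unfamiliar with drake, a minimal sketch of the pattern such a make.R follows is shown below; the targets are illustrative placeholders, not the paper's actual pipeline.

```r
library(drake)

# Declare the pipeline as a plan of named targets.
plan <- drake_plan(
  raw    = read.csv(file_in("data/ferns.csv")),  # assumed input file
  model  = lm(cover ~ habitat, data = raw),      # placeholder analysis
  report = summary(model)
)

# Build the plan; drake caches targets and only rebuilds what changed.
make(plan)
```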
-
Authors: Kamvar ZN, Amaradasa BS, Jhala R, McCoy S, Steadman JR, Everhart SE
Mean reproducibility score: 6.0/10 | Number of reviews: 1
Why should we attempt to reproduce this paper?
This paper is reproduced weekly in a Docker container on continuous integration, but it is also set up to work via local installs. It would be interesting to see whether it is reproducible by a human operator who knows nothing of the project or toolchain.
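Assuming a fairly standard R project layout, the local-install route might look like the following; the entry-point script is a guess, not taken from the repository.

```r
# From a checkout of the project repository:
install.packages("remotes")
remotes::install_deps(".", dependencies = TRUE)  # install packages listed in DESCRIPTION
source("make.R")  # assumed entry-point script that reruns the analyses
```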