I hope that the evaluation framework introduced in the paper will be adopted by other researchers working on mutational signatures.
The method is trained on the data that were available at the time, but it is meant to be re-trained as soon as new data are published, and it would be great to be confident that someone else will actually be able to do that retraining.
We do care about reproducibility. If we receive any feedback, we will be happy to improve our GitHub repository and/or the submitted manuscript to make reproduction easier!
Because:

- Two fellow PhD students working on different topics have been able to reproduce some figures by following the README instructions, and I hope this extends to other people.
- I've tried to incorporate as many best practices as possible to make my code and data open and accessible.
- I've tried to make sure that my data is exactly reproducible with the specified random-seed strategy (see the sketch after this list).
- The paper suggests a method that should be useful to other researchers in my field, which is not useful unless my results are reproducible.
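The seed strategy itself isn't shown above, so purely as an illustration: assuming an R-based pipeline (the language choice and every name below are hypothetical, not the author's actual code), a minimal sketch of a random-seed strategy might derive one fixed seed per stochastic step:

```r
# Minimal sketch of a fixed-seed strategy -- hypothetical, not the author's
# actual code. One global seed plus a derived per-step seed keeps every
# stochastic step reproducible even when steps are re-run individually.
GLOBAL_SEED <- 42

seed_for <- function(step_name) {
  # Derive a stable per-step seed from the global seed and the step name.
  GLOBAL_SEED + sum(utf8ToInt(step_name))
}

run_step <- function(step_name, n = 100) {
  set.seed(seed_for(step_name))  # fix the RNG state before any sampling
  rnorm(n)                       # stand-in for the real stochastic computation
}

# Re-running a step always yields identical draws:
identical(run_step("bootstrap"), run_step("bootstrap"))  # TRUE
```

Deriving a seed per step, rather than calling set.seed once at the top of a script, means any single step can be re-run in isolation and still produce the same numbers.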
The original data took quite a while to produce for a previous paper, but for this paper all tables and figures should be exactly reproducible by simply running the Jupyter notebook.
It uses the drake R package, which should make reproducing R projects much easier (just run make.R and you're done). However, it depends on very specific package versions, which are provided by the accompanying Docker image.
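The repository's actual make.R isn't reproduced here; as a minimal sketch of the drake pattern being described (the file names and the toy analysis are hypothetical), a make.R typically declares a plan of targets and then builds it:

```r
# make.R -- minimal sketch of a drake workflow; hypothetical file and target
# names, not the repository's actual plan. drake caches every target and only
# rebuilds what is out of date, which is what makes "run make.R and you're
# done" work.
library(drake)

plan <- drake_plan(
  raw    = read.csv(file_in("data/mutations.csv")),   # tracked input file
  fit    = lm(count ~ signature, data = raw),         # a toy analysis step
  table1 = write.csv(summary(fit)$coefficients,
                     file_out("results/table1.csv"))  # tracked output file
)

make(plan)  # builds targets in dependency order, skipping up-to-date ones
```

Because the results depend on exact package versions, such a script would be run inside the accompanying Docker image rather than against whatever package versions happen to be installed locally.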