I hope that the evaluation framework introduced in the paper will be used by other researchers working on mutational signatures.
This article was meant to be entirely reproducible, with the data and code published alongside it. However, it is not packaged in a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
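For what it's worth, containerizing the analysis would mostly be a matter of writing a small Dockerfile. Here is a minimal sketch, assuming a Python-based analysis with a `requirements.txt` and an entry-point script `run_analysis.py` (both names are hypothetical, not the actual files in the repository):

```dockerfile
# Hypothetical sketch of a container for the analysis; file names are illustrative.
FROM python:3.10-slim

WORKDIR /analysis

# Pin dependencies so the same versions are installed years from now.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the published data and code into the image.
COPY . .

# Re-run the full analysis when the container starts.
CMD ["python", "run_analysis.py"]
```

Freezing the base image and the dependency versions is what would give the analysis a fighting chance of still passing the reproducibility test next year.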
The method is trained on the data that were available at the time, but it is designed to be retrained as soon as new data are published. It would be great to know for certain that someone else will be able to do this too. If we receive any feedback, we would be happy to improve our GitHub repository to make reproduction easier!