We knitted manuscript.Rmd to pdf, as prompted in the description here: https://osf.io/tfd8z/. Because {papaja} was not on CRAN, install.packages() won't work, and we also had to sort out paths first (otherwise here() will not work). We then had to dig up an older version of {papaja} (installed with devtools::install_github("crsh/papaja@1999ba3")) to actually manage to knit to pdf.

Our group had different degrees of familiarity with R and R Markdown. Users with less familiarity felt that a more descriptive README file would have been useful to identify where to start, how the repository was organised, and so on.
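For anyone hitting the same problem, a minimal sketch of the workaround we used (assuming {devtools} is available; the commit hash is the one mentioned above):

```r
# {papaja} was not on CRAN at the time, so install.packages("papaja") fails.
# Pin the specific older commit that knitted the manuscript for us:
install.packages("devtools")                     # if not already installed
devtools::install_github("crsh/papaja@1999ba3")  # older, working version
```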
We used a laptop with a rolling Ubuntu-based Linux distribution and a KDE Neon desktop environment, plus two university-managed Windows 10 laptops. We needed to install/update R and a variety of R packages, which ended up being somewhat frustrating. The {papaja} and {meta} packages gave us the biggest headache, especially {papaja}, which was not on CRAN at the time of our ReproHack (2021-11-18).
R and RStudio, a variety of R packages (hard dependencies), pandoc/LaTeX to knit to pdf.
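A hedged sketch of how these dependencies can be set up on a fresh machine (package names as used in the repository; {tinytex} is one convenient way to get a LaTeX distribution, not necessarily what the authors used):

```r
# Core packages for knitting the manuscript and running the meta-analysis
install.packages(c("rmarkdown", "meta", "tinytex"))
tinytex::install_tinytex()     # provides a LaTeX distribution for pdf output
rmarkdown::pandoc_available()  # check that pandoc is found (it ships with RStudio)
```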
The main challenges we found were:
- Sorting out dependencies. This was the main issue.
- Understanding how the material was organised: we felt that a more descriptive/comprehensive README file would have simplified our effort.
- We couldn't test the CodeOcean capsule without subscribing first.
- It would have been useful if the Dockerfile from the CodeOcean page had been included in the Data&Analysis repository, as it fully specifies the dependencies of the project.
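As a complement (or alternative) to shipping the Dockerfile, package dependencies could be pinned inside the repository itself. A sketch using {renv}, which we suggest here as one option rather than something the authors used:

```r
install.packages("renv")
renv::init()      # discovers the packages used by the project's scripts
renv::snapshot()  # writes renv.lock, recording exact package versions
# A collaborator cloning the repository then runs renv::restore()
# to recreate the same package library.
```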
Once every dependency (and path) was sorted out, we could simply run the analysis and knit the pdf. That was super easy.
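For reference, once the environment is in place the final step boils down to one call (assuming the working directory is the repository root):

```r
# Knit the manuscript; the output format (pdf via {papaja}/LaTeX)
# is taken from the file's YAML header.
rmarkdown::render("manuscript.Rmd")
```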
The structure of the data&analysis repository was very good. Once we sorted out the dependencies, it was super easy for us to reproduce all the results of the paper, so good job on that! In just a couple of hours of work we could sort everything out and achieve all of the above.
Some further thoughts on what we felt would have simplified our effort even more:
Each individual R script is actually very well documented, and we really liked that. However, at a "macro" level it was hard for us to tie together the different components/scripts of the project when we wanted to dig more deeply into its various aspects. Again, please refer to our earlier comments about a global README file.
We didn't really have a lot of time to go through the code and methods, but we felt that everything was OK. All R code was in separate scripts, so we did not really need to inspect it just to recompile the document.
The MIT license did not have a copyright year or holder. We're not sure how this affects the possibility of re-using the code.
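For completeness, the line of the standard MIT template that was left unfilled looks like this (year and holder below are placeholders to be replaced by the authors):

```text
Copyright (c) <YEAR> <COPYRIGHT HOLDER>
```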
We hope our comments don't come across as harsh; we think the authors did a good job structuring the project and ensuring the results and manuscript are fully reproducible. There were no obvious shortcomings, so we focussed on the details! :) Thanks for sharing this paper with the ReproHack community; it is a nice "meta" example of reproducibility, and we enjoyed working on it.