During this Reproducibility Hackathon (ReproHack) held at the Swiss Reproducibility Conference 2024, participants will attempt to reproduce results from published research papers with openly available code and data. The goals of the event are to:
There will also be an informal apero with pizza and drinks after the event.
The event is open to participants with varying levels of expertise in computational reproducibility, although some basic familiarity with research code and data is recommended. Participants can tailor the challenge to their skills by selecting a paper that matches their background (programming language, type of analysis, etc.). To take part in the ReproHack, register for the Swiss Reproducibility Conference 2024 and sign up for the ReproHack during the registration process.
The ReproHack is in no way an attempt to criticize or discredit the research articles we try to reproduce. We see reproduction as a beneficial scientific activity in its own right, with useful results for authors and valuable learning experiences for participants and the research community as a whole.
Please bring your laptop along with a charger to the event. You can either choose from a list of papers or propose a paper yourself (the paper should have publicly available code and data). We will use the ReproHack Hub platform for submission and review of the papers. If you want to propose a paper, please submit the paper to the ReproHack Hub and associate it with this event during submission.
By joining our event, you agree to abide by our Code of Conduct.
Time | Event |
---|---|
13:45 | Welcome and Orientation |
14:00 | Ice Breaker Session in Groups |
14:15 | Select Papers, Team Formation |
14:30 | Round I of ReproHacking |
15:15 | Coffee Break |
15:30 | Round II of ReproHacking |
16:30 | Re-group and sharing of experiences |
16:50 | Feedback and Closing |
In this paper, an R package was used to improve the reproducibility of the analyses, so it would be valuable to find out to what extent this approach works in practice. The R package covers the following analyses: (1) data trimming and preparation, (2) descriptive statistics, (3) reliability and correlations, (4) t-tests and Bayesian t-tests, (5) latent-change models (structural equation modeling approach), and (6) multiverse analyses. Furthermore, all deidentified data, experiment code, research materials, and results are publicly accessible on the Open Science Framework (OSF) at https://osf.io/ngfxv. The study’s design and analyses were preregistered on OSF; the preregistration can be accessed at https://osf.io/tywu7.
This paper is fully reproducible: we provide the protocol the different modelers used, the data produced by their models, the observed data, and the code that generates the paper’s results, figures, and text. I have not come across any other paper in forestry that is as fully reproducible, so it may be a rare example in this field and, hopefully, a motivation for others to do the same. Please note that we do not provide the models that were used to run the simulations, since their outputs serve as the input data (analogous to data collection), but we do provide the data resulting from these simulations.
This is an example of a study with an analysis going beyond a simple t-test; yet the entire analysis is based on one comprehensive and manageable data set provided with the publication.
The paper describes pyKNEEr, a Python package for open and reproducible research on femoral knee cartilage that uses Jupyter notebooks as a user interface. I created this paper with the specific intent of making both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field.