Some may argue that the field of machine learning is in a reproducibility crisis. It would be interesting to know how difficult it is for others to reproduce the results of a paper that proposed a rather complex methodology.
The current code is written in Torch, which is no longer actively maintained. Since deep learning in nanophotonics is an area of active interest (e.g. for the design of new metamaterials), it is important to update the code to use a more modern deep learning library such as TensorFlow/Keras.
I suggested a few papers last year. I'm hoping that we've improved our reproducibility with this one this year. We've done our best to package it up both in Docker and as an R package. I'd be curious to know which turns out to be the best way to reproduce it: working through the vignettes or spinning up a Docker instance. Which is the preferred method?
The code should be fairly easy to reproduce. It reads the data, performs a few descriptive statistical analyses, and plots figures using ggplot2.
Cleaning the databases used for this study was one of its most challenging aspects, so making them public is the best way to get the most out of that effort. We made an effort to document all analyses and data wrangling steps. We are interested to know whether the study is truly reproducible, so that we can follow the same scheme in future projects or adjust it accordingly.
To use data from a manufacturing process: RTM (resin transfer molding) for carbon composite production. To see if you can handle large amounts of data: the 36k injection runs contain a total of 5 million frames. Perhaps you can match our performance on smaller subsets of the data, which would be great.
The paper describes pyKNEEr, a python package for open and reproducible research on femoral knee cartilage using Jupyter notebooks as a user interface. I created this paper with the specific intent of making both the workflows it describes and the paper itself open and reproducible, following guidelines from authorities in the field. Therefore, two things in the paper can be reproduced: 1) Workflow results: Table 2 contains links to all the Jupyter notebooks used to calculate the results. Computations are long and might require a server, so if you want to run them locally, I recommend using only 2 or 3 images as input. The paper should be sufficient, but if you need further introductory information, there is a documentation website (https://sbonaretti.github.io/pyKNEEr/) and a "how to" video (https://youtu.be/7WPf5KFtYi8). 2) Paper graphs: in the captions of figures 1, 4, and 5 you can find links to the data repository, the code (a Jupyter notebook), and the computational environment (binder) needed to fully reproduce each graph. These computations can easily be run locally and take only a few seconds. All Jupyter notebooks automatically download the data from Zenodo and declare their dependencies, which should make reproduction easier.
The paper and its code + data were published 4 years ago; will they still work? I always try to release the data and code needed to reproduce my papers, but I seldom receive feedback. It would be useful to have comments from a team of reproducers, in order to improve sharing for future research (I have since switched from MATLAB to Python).
This paper provides a novel approach to identifying oncogenes based on RNA overexpression in subsets of tumors relative to adjacent normal tissue. Showing that this study can be reproduced would aid other researchers attempting to identify oncogenes in other cancer types using the same methodology.
It would be greatly helpful to have the scientific record I've published checked independently, so that any errors can be corrected. I would also learn how to share data in a way that is more accessible to others, if you could give me feedback.
I tried hard to make this paper as reproducible as possible, but as techniques and dependencies become more complex, it is hard to make it 100% clear. Any form of feedback is more than welcome.
A currently submitted paper on the effects of COVID-19 on mental health. Unique clinical data (a time series spanning the onset of the pandemic) and methods; hopefully fun to work on. Possibly too boring / easy to reproduce given my data and code? Not sure.
To see whether we did a good enough job in providing data and methods, and to check how the code has aged with respect to current libraries.
- This paper is a good example of a standard social science study that is (I hope!) fully reproducible, from the main analysis to the supplementary analyses and figures.
- I have not yet received any external feedback on its reproducibility, so I would be interested to see whether I have overlooked any gaps in the reproduction workflow I had anticipated.
If all went right, the analysis should be fully reproducible without the need to make any adjustments. The paper aims to find optimal locations for new parkruns, but we were not 100% sure how 'optimal' should be defined. We provide a few examples, but the code was meant to be flexible enough to allow potential decision makers to specify their own, alternative objectives. The spatial data set is also quite interesting and fun to play around with. Caveat: the full analysis takes a while to run (~30+ minutes) and may require 8 GB of RAM or more.
Open data and reproducibility were important in this project.
It is a rare example of full reproducibility in the field of plant disease epidemiology.
Low Energy Electron Microscopy (LEEM) is a somewhat specialized form of electron microscopy used to study surfaces and 2D materials. In this paper we describe a set of data processing techniques adapted to the peculiarities of LEEM. This is combined with a parallelized Python implementation using Dask in separate notebooks. So if you are interested in microscopy, image analysis, clustering of experimental physics data, or parallel Python, this paper should be interesting to you.
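For readers unfamiliar with Dask, the sketch below is a generic, hypothetical illustration of the kind of lazy, chunked processing a parallelized image-analysis notebook might use; it is not taken from the paper's notebooks, and the stack size, chunking, and normalization step are assumptions chosen purely for illustration.

```python
import dask.array as da

# Stand-in for a LEEM-like image stack: 1,000 frames of 256 x 256 pixels,
# split into chunks of 100 frames that Dask can process in parallel.
stack = da.random.random((1000, 256, 256), chunks=(100, 256, 256))

# Per-frame standardization followed by an average image; nothing is
# evaluated until .compute() is called, at which point the chunks are
# processed in parallel and out of core if needed.
per_frame_mean = stack.mean(axis=(1, 2), keepdims=True)
per_frame_std = stack.std(axis=(1, 2), keepdims=True)
normalized = (stack - per_frame_mean) / per_frame_std
mean_image = normalized.mean(axis=0).compute()

print(mean_image.shape)  # (256, 256)
```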
The results of the individual studies (4 in total) could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important goalpost for future work.
We propose a simple method to retrieve optical constants from single optical transmittance measurements, in particular in the fundamental absorption region. The construction of the required envelopes is somewhat arbitrary and will depend on the user; however, the method should still be robust and deliver similar results.