What do analyses of city size distributions have in common?


Submitted by clementinecottineau

Sept. 1, 2022, 2:17 p.m.


Clémentine Cottineau
Cottineau, C. What do analyses of city size distributions have in common? Scientometrics 127, 1439–1463 (2022). https://doi.org/10.1007/s11192-021-04256-8
DOI: 10.1007/s11192-021-04256-8


Mean reproducibility score: 8.0/10 | Number of reviews: 1

Brief Description
This paper presents a meta-analysis of the empirical literature on Zipf's law for cities. Combining citation network analysis and bibliometrics, it explores the link between publication bias and reporting bias in the multidisciplinary field of quantitative urban studies/urbanism. The data and metadata include the full texts and reference lists of 66 scientific articles published in English. Using R, the author constructed similarity networks of the 66 articles reviewed, based on the terms they share, the references they cite in common, and the journals they cite. These similarity networks are then used as explanatory variables in a model of the similarity network of the Zipf estimates reported in the 66 articles. The author finds that proximity in the words frequently used by authors correlates positively with their tendency to report similar values and dispersions of Zipf estimates. The reference framework of the articles also plays a role: articles that cite similar references tend to report similar average values of Zipf estimates. As a complement to previous meta-analyses, the article sheds light on the scientific text and context mobilized to report on city size distributions, identifies gaps in the corpus and potentially overlooked articles, and confirms the relationship between publication and reporting biases.
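
For readers unfamiliar with this kind of network regression, the sketch below shows roughly what such an analysis could look like in R. It is not the author's published code: the document-term matrix is simulated, the reference- and estimate-similarity matrices are stand-ins, and the model is fitted with the MRQAP routine from the sna package, which may differ from the author's actual specification.

    # Minimal sketch of a network regression on article similarity matrices.
    # All inputs are simulated stand-ins, not the paper's data.
    library(sna)

    set.seed(1)
    n <- 66                                           # number of articles
    dtm <- matrix(rbinom(n * 200, 1, 0.1), nrow = n)  # fake document-term matrix

    # Cosine similarity between articles based on shared terms.
    norms <- sqrt(rowSums(dtm^2))
    term_sim <- (dtm %*% t(dtm)) / (norms %o% norms)
    diag(term_sim) <- 0

    # Stand-ins for similarity in cited references and in reported Zipf estimates.
    sym <- function(m) { m <- (m + t(m)) / 2; diag(m) <- 0; m }
    ref_sim  <- sym(matrix(runif(n^2), n, n))
    zipf_sim <- sym(matrix(runif(n^2), n, n))

    # Regress the Zipf-estimate similarity network on the text and reference
    # networks, with QAP permutation tests for inference.
    fit <- netlm(zipf_sim, list(term_sim, ref_sim),
                 mode = "graph", nullhyp = "qapspp", reps = 100)
    summary(fit)

The point of the QAP permutation test is that dyadic observations in similarity matrices are not independent, so classical standard errors would overstate significance; permuting rows and columns of the networks preserves their structure under the null.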
Why should we reproduce your paper?
This article was meant to be entirely reproducible, with the data and code published alongside it. It is, however, not embedded in a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
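
For what it's worth, containerizing an R analysis of this kind takes only a few lines. The sketch below is hypothetical (the package list and entry-script name are guesses, not taken from the paper's repository), but something like it would freeze the computational environment:

    FROM rocker/r-ver:4.2.0
    # Install the R packages the analysis needs (illustrative list, assumed).
    RUN R -e "install.packages(c('sna', 'igraph'), repos = 'https://cloud.r-project.org')"
    COPY . /analysis
    WORKDIR /analysis
    # 'run_meta_analysis.R' is a placeholder for the repository's entry script.
    CMD ["Rscript", "run_meta_analysis.R"]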
What should reviewers focus on?
