New paper highlights replication studies in economics

Posted: October 23rd, 2018 | Filed under: Data Sharing, EDaWaX

Four researchers associated with the former EDaWaX project have recently published a new research article entitled “Replication studies in economics – How many and which papers are chosen for replication, and why?” The article, written by Frank Mueller-Langer, Benedikt Fecher, Dietmar Harhoff and Gert G. Wagner, sheds light on replication practice in empirical economics. It has been published in Research Policy under a Creative Commons license and can be found here. Below, the authors provide a brief overview of the paper:

Academia is facing a quality challenge: the global scientific output doubles every nine years, while the number of retractions and instances of misconduct is increasing. In this regard, replication studies can be seen as important post-publication quality checks that complement the established pre-publication peer review process. It is for this reason that replicability is considered a hallmark of good scientific practice. In our recent research paper, we explore how often replication studies are published in empirical economics and what types of journal articles are replicated. We find that, between 1974 and 2014, only 0.1% of publications in the top 50 economics journals were replication studies. We provide empirical support for the hypotheses that higher-impact articles and articles by authors from leading institutions are more likely to be replicated, whereas the replication probability is lower for articles that appeared in the top 5 economics journals. Our analysis also suggests that mandatory data disclosure policies may have a positive effect on the incidence of replication.

Scientific research plays an important role in the advancement of technologies and the fostering of economic growth. Hence, the production of thorough and reliable scientific results is crucial from a social welfare and science policy perspective. However, in times of increasing retractions and frequent instances of inadvertent errors, misconduct or scientific fraud, scientific quality assurance mechanisms are subject to a high level of scrutiny. Issues regarding the replicability of scientific research have been reported in multiple scientific fields, most notably in psychology. A 2015 report by the Open Science Collaboration estimated the reproducibility of psychological science by replicating 100 studies from three high-ranking psychology journals: only 36% of the replications yielded statistically significant effects, compared to 97% of the original studies. Similar issues have been reported in other fields as well. For example, Camerer and colleagues attempted to replicate 18 studies published between 2011 and 2014 in two top economics journals, the American Economic Review and the Quarterly Journal of Economics, and found a significant effect in the same direction as the original research in only 11 out of 18 replications (61%). Considering the impact that economic research has on society, for example in evidence-based policy making, there is a particular need to explore and understand the drivers of replication studies in economics in order to design favorable boundary conditions for replication practice.

We explore formal, i.e., published, replication studies in economics by examining which and how many published papers are selected for replication and what factors drive replication in these instances. To this end, we use metadata on all articles published in the top 50 economics journals between 1974 and 2014. There are also informal replication studies that are not published in scientific journals (especially replications conducted in teaching or published as working papers) and an increasing number of other forms of post-publication review (e.g., discussions on websites such as PubPeer); these are not covered by our approach.
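To illustrate the kind of analysis such article-level metadata allow, here is a minimal, purely hypothetical sketch (not the paper's actual code): it assumes a metadata file and column names that are invented for illustration and flags candidate replication studies with a simple keyword heuristic before computing their share.

```python
# Purely illustrative sketch -- not the authors' code. It assumes a hypothetical
# CSV of article-level metadata with columns 'title', 'journal' and 'year'.
import pandas as pd

# Hypothetical metadata file covering the top 50 journals, 1974-2014
articles = pd.read_csv("top50_articles_1974_2014.csv")

# Naive keyword heuristic to flag *candidate* replication studies; a real study
# would still have to verify each candidate manually.
keywords = ["replicat", "re-examination", "reexamination", "a comment on"]
pattern = "|".join(keywords)
articles["is_replication"] = (
    articles["title"].str.lower().str.contains(pattern, regex=True, na=False)
)

n_candidates = int(articles["is_replication"].sum())
share = articles["is_replication"].mean()
print(f"Candidate replication studies: {n_candidates}")
print(f"Share of all sampled articles: {share:.2%}")
```

In practice, such keyword screening only yields candidates that still have to be checked by reading each article, which is essentially the manual coding step a study of this kind requires.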

We find that, between 1974 and 2014, 0.1% of publications in the top 50 economics journals were replication studies. We also find evidence that replication is a matter of impact: higher-impact articles and articles by authors from leading institutions are more likely to be replicated, whereas the replication probability is lower for articles that appeared in the top 5 economics journals. Our analysis further suggests that mandatory data disclosure policies may have a positive effect on the incidence of replication.
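The idea that replication probability varies with article impact, author affiliation, journal tier and journal data policy can be thought of in terms of a simple binary-outcome regression. The sketch below is again only an illustration under assumed variable names, not the specification estimated in the paper.

```python
# Purely illustrative sketch -- not the paper's specification. It assumes the
# hypothetical 'articles' frame from above, extended with article-level
# covariates, and fits a simple logit for the probability of being replicated.
import statsmodels.formula.api as smf

# Assumed (hypothetical) columns:
#   replicated          1 if at least one published replication of the article exists
#   log_citations       log(1 + citation count), a rough proxy for impact
#   top5_journal        1 if the article appeared in a top 5 economics journal
#   leading_institution 1 if an author is affiliated with a leading institution
#   data_policy         1 if the journal had a mandatory data disclosure policy
model = smf.logit(
    "replicated ~ log_citations + top5_journal + leading_institution + data_policy",
    data=articles,
).fit()
print(model.summary())
```

In such a setup, positive coefficients on impact, affiliation and data policy together with a negative coefficient on the top 5 dummy would correspond to the pattern described above.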

Based on our findings, we argue that replication efforts could be incentivized by reducing the cost of replication, for example by promoting data disclosure. Our results further suggest that the decision to conduct a replication study is partly driven by the replicator’s reputational considerations. Arguably, the currently low number of replication studies could increase if replications received more formal recognition (for instance, through publication in high-impact journals), dedicated funding (for instance, for replicating articles with a high impact on public policy), or awards. Since replication is, at least partly, driven by reputational rewards, it may be a viable strategy to document and reward formal as well as informal replication practices.

