Abstract
The credibility revolution in economics has promoted causal identification using randomized control trials (RCT), difference-in-differences (DID), instrumental variables (IV), and regression discontinuity design (RDD). Applying multiple approaches to over 21,000 hypothesis tests published in 25 leading economics journals, we find that the extent of p-hacking and publication bias varies greatly by method. IV (and to a lesser extent DID) are particularly problematic. We find no evidence that (i) papers published in the Top 5 journals are different to others; (ii) the journal "revise and resubmit" process mitigates the problem; (iii) things are improving through time.
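The headline findings rest on examining the distribution of published test statistics for bunching just above conventional significance thresholds. Below is a minimal sketch of one standard diagnostic in this literature, a caliper test that compares how many |z|-statistics fall just below versus just above the 1.96 cutoff; the window width, the synthetic data, and the simple binomial test are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch of a caliper test for bunching of z-statistics at the
# 5% significance threshold (z = 1.96). Window width and the synthetic
# data below are illustrative assumptions, not the paper's specification.
import numpy as np
from scipy.stats import binomtest

def caliper_test(z_stats, cutoff=1.96, width=0.20):
    """Compare counts of |z| just below vs. just above the cutoff.

    Absent p-hacking, a z-statistic landing inside this narrow window
    should be roughly equally likely to fall on either side of the
    cutoff, so the count above is approximately Binomial(n, 0.5).
    """
    z = np.abs(np.asarray(z_stats))
    in_window = z[(z > cutoff - width) & (z < cutoff + width)]
    n_above = int((in_window > cutoff).sum())
    result = binomtest(n_above, n=len(in_window), p=0.5,
                       alternative="greater")
    return n_above, len(in_window), result.pvalue

# Synthetic example: honest z-stats plus a spike of "hacked" results
# nudged just over the threshold.
rng = np.random.default_rng(0)
honest = np.abs(rng.normal(0, 1.5, size=5000))
hacked = rng.uniform(1.96, 2.10, size=150)
n_above, n_window, p = caliper_test(np.concatenate([honest, hacked]))
print(f"{n_above}/{n_window} in-window z-stats exceed 1.96 (p = {p:.4f})")
```

In the paper, comparisons of this kind are run separately by identification strategy (RCT, DID, IV, RDD), which is what allows the extent of bunching to be attributed to the method rather than to the literature as a whole.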
Original language | English
---|---
Pages (from-to) | 3634–3660
Number of pages | 27
Journal | American Economic Review
Volume | 110
Issue number | 11
DOIs |
Publication status | Published - Nov 2020
Bibliographical note
Publisher Copyright: © 2020 American Economic Association. All rights reserved.
ASJC Scopus subject areas
- Economics and Econometrics