1) Papers whose authors withheld data had more reporting errors: cases where the reported p-value differed from the p-value implied by the coefficient and standard error reported in the same paper (a quick consistency check is sketched after this list). I'd really like to think these were all innocent typos, but in seven papers the errors reversed the findings. None of those seven authors shared their data.
2) Papers whose authors withheld data tended to have larger p-values, meaning that their results were not as "strong" in some sense. That interpretation tortures the idea of a p-value a little, but it certainly reflects how many researchers think about p-values. It's striking that researchers whose results were "weaker" were less likely to share their data. It also suggests that researchers who get a range of p-values from different, plausible models tend to report the one just below 0.05 rather than the one just above. But then, we already knew that.
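Here is a minimal sketch of the consistency check behind point 1: recompute the two-sided p-value implied by a reported coefficient and standard error and compare it to the p-value the paper states. It assumes a normal (z) approximation; for small samples a t distribution with the right degrees of freedom would be more accurate. The numbers in the example are hypothetical.

```python
from scipy import stats

def recomputed_p(coef: float, se: float) -> float:
    """Two-sided p-value implied by a coefficient and its standard error (z approximation)."""
    z = coef / se
    return 2 * stats.norm.sf(abs(z))

# Hypothetical example: a paper reports coef = 0.42, se = 0.21, and claims p < 0.01.
reported_threshold = 0.01
implied_p = recomputed_p(0.42, 0.21)   # roughly 0.046
if implied_p > reported_threshold:
    print(f"Possible reporting error: implied p = {implied_p:.3f}, but paper claims p < {reported_threshold}")
```

Run over a table of reported coefficients, standard errors, and p-values, a check like this flags exactly the kind of discrepancy described above, including the cases large enough to flip a result from significant to not.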
These charts are from the paper "Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results".
Journals should make it a policy to require replication data. Chris Blattman suggests these changes:
1. Journals should require submission of replication data and code files with final paper submissions, for posting on the journal site. (The Journal of Conflict Resolution is one of the few major political science or economics journals I know that does so faithfully.)
2. PhD field and methods courses ought to encourage replication projects as term assignments. (Along with encouragement to be diplomatic, something new scholars are slow to learn, to their detriment.)