A recent paper makes a disturbing claim about the state of science: research that cannot be replicated is cited more often than research that can. In other words, according to the report, published in Science Advances, bad science seems to receive more attention than good science.
This paper follows up on reports of the “replication crisis” in psychology, in which a large number of academic papers put forward results that other researchers could not reproduce, and it argues that the problem is not limited to psychology. This matters for several reasons. If a large portion of science fails to meet the norm of reproducibility, that work does not provide a solid basis for decision-making. Results that fail to replicate can delay the application of science to the development of new drugs and technologies. They can also undermine public trust, making it harder to persuade Americans to get vaccinated or to address climate change. And money spent on invalid science is wasted: one study estimated that non-replicable medical research costs as much as $28 billion per year in the United States alone.
In the new study, the authors tracked replication failures recorded in psychology journals, economics journals, and Science and Nature. The results were troubling: papers that failed to replicate were cited more often than average, and even after the replication failures were published, only 12% of subsequent citations acknowledged them.
These results echo a 2018 study that analyzed the cascades of 126,000 rumors on Twitter and showed that fake news spreads faster and reaches more people than verified true statements. That study also found that bots spread true and false news in equal proportion: it is people, not bots, who are responsible for the disproportionate spread of falsehoods online.
A potential explanation for these findings involves a double-edged sword. Academia prizes novelty: new discoveries, new results, “frontier” and “disruptive” research. On one hand, this makes sense. If science is a process of discovery, then papers that offer something new and surprising are more likely to represent a major advance than papers that reinforce the existing knowledge base or modestly extend its range of application. Moreover, scholars and laypeople agree that surprises are more interesting (and certainly more entertaining) than the predictable, normal, and everyday. No editor wants to be the one who rejected a paper that later became the foundation of a Nobel Prize. The problem is that surprising results are surprising precisely because they run counter to what our experience so far has led us to believe, which means they are quite likely to be wrong.
The authors of the citation study suggest that reviewers and editors apply lower standards to “flashy” or dramatic papers, and that highly interesting papers attract more attention, discussion, and citation. In other words, there is a bias in favor of novelty. The authors of the Twitter study also pointed to novelty as the culprit: they found that the fake news that spread quickly online was significantly more novel than real news.
Novel claims can be very valuable. If something surprises us, it means we might learn something from it. The operative word is “might,” because this premise assumes that the surprising claim is at least partly true. But sometimes things are surprising because they are wrong. All of this suggests that researchers, reviewers, and editors should take steps to correct their bias toward novelty, and proposals for how to do so have already been made.
One question remains. As the authors of the citation study point out, many replication efforts focus on splashy papers that have received a great deal of attention. But such papers are more likely than average to fail further scrutiny. Replication efforts concentrated on flashy, attention-grabbing papers will therefore not reflect how often science as a whole, the representative norm, fails. In one case I have discussed elsewhere, a paper alleging reproducibility problems failed to disclose the researchers’ own methods, yet that paper has been, yes, highly cited. So scientists must take care that, in seeking to flag papers that cannot be replicated, they do not produce flashy but untenable claims of their own.