Research Collected: Evidence for Skepticism of Research

I enjoy reading research and thinking about its implications. However, I often have to remind myself that there is reason to be skeptical of any individual study. I generally prefer to wait until a number of studies fit a particular narrative or hypothesis before gaining confidence in a finding. When I share research here, it’s usually from an “isn’t this interesting” perspective, not a “this is true” perspective. Ideally, far more research would be replicated than currently is. So far, examinations of the reproducibility of research have found it to be low, even for studies that have become actionable in health care. This collection is a set of papers examining issues with the accuracy of scientific research.

Why Most Published Research Findings Are False (2005, PLOS Medicine)

“There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field… Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”
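For anyone who wants to see the arithmetic behind that claim, the paper’s core quantity is the positive predictive value (PPV) of a claimed finding: PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a probed relationship is true, α is the Type I error rate, and β is the Type II error rate. Below is a minimal Python sketch of that formula, ignoring the paper’s additional bias term and using illustrative numbers of my own choosing, not the paper’s:

def ppv(R, alpha=0.05, power=0.80):
    """Positive predictive value of a 'positive' research finding,
    per Ioannidis (2005), with the bias term set to zero:
        PPV = (1 - beta) * R / (R - beta * R + alpha)
    R is the pre-study odds that a probed relationship is true."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Illustrative numbers (my assumptions): with 1-in-10 pre-study odds,
# 80% power and alpha = 0.05, a positive finding is true only about
# 62% of the time -- and accounting for bias lowers that further.
print(f"PPV = {ppv(R=0.1):.0%}")  # PPV = 62%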

Drug development: Raise standards for preclinical cancer research (2012, Nature)

“Over the past decade, before pursuing a particular line of research, scientists… in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies… It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases.”

Believe it or not: how much can we rely on published data on potential drug targets? (2011, Nature Reviews Drug Discovery)

“We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings… In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects because the evidence that was generated for the therapeutic hypothesis was insufficient to justify further investments into these projects.”

Estimating the reproducibility of psychological science (2015, Science)

“We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available… Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects.”

Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time (2015, PLOS ONE)

“17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome in comparison to only 2 among the 25 (8%) trials published after 2000… There has been no change in the proportion of trials that compared treatment to placebo versus active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings.”

Scientific method: Statistical errors (2014, Nature)

“According to one widely used calculation, a P value of 0.01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of 0.05 raises that chance to at least 29%… Critics also bemoan the way that P values can encourage muddled thinking. A prime example is their tendency to deflect attention from the actual size of an effect… But significance is no indicator of practical relevance…”
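Those “at least 11%” and “at least 29%” figures can be reproduced with the minimum Bayes factor bound −e·p·ln(p) (Sellke, Bayarri and Berger’s calibration of P values), which I believe is the calculation the article is pointing to. The sketch below also assumes even prior odds that there is a true effect, which is my assumption, not the article’s:

import math

def min_false_alarm_prob(p, prior_odds_null=1.0):
    """Lower bound on the chance that a 'significant' result is a
    false alarm, using the minimum Bayes factor bound -e * p * ln(p)
    (valid for p < 1/e). prior_odds_null = 1.0 means even prior odds."""
    min_bf_for_null = -math.e * p * math.log(p)
    posterior_odds_null = prior_odds_null * min_bf_for_null
    return posterior_odds_null / (1.0 + posterior_odds_null)

for p in (0.05, 0.01):
    print(f"P = {p}: false-alarm probability >= {min_false_alarm_prob(p):.0%}")
# P = 0.05: false-alarm probability >= 29%
# P = 0.01: false-alarm probability >= 11%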

Drug Companies & Doctors: A Story of Corruption (2009, The New York Review of Books)

“[C]onflicts of interest and biases exist in virtually every field of medicine, particularly those that rely heavily on drugs or devices. It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.”
