Here is an interesting article about reproducibility in science -- a really important concept that falls by the wayside more often than it should.
A daunting amount of research is published monthly in every scientific field, including the social sciences. Papers often contradict one another, indicating results that are unreliable or that hold only under the specific conditions of the experiment, the particular group surveyed, and so on. Many other published works, though, go unchallenged for months or years before anyone attempts to recreate the experiment, improve the experimental method, or design a new one. In the meantime, other work may accept those results as fact and begin a new phase of experimentation built on the assumption that the prior findings were true. If that initial assumption was a bad one, the result can be dead-end projects or an ever-growing stack of publications resting on unreliable data.
This is why projects like the "Reproducibility Initiative," discussed in the linked article, are so important:
"‘Published' and ‘true' are not synonyms," said Brian Nosek, a psychology professor at the University of Virginia in Charlottesville and a member of the initiative's advisory board.
Last year, Bayer Healthcare reported that its scientists could not reproduce some 75 percent of published findings in cardiovascular disease, cancer and women's health.
In March, Lee Ellis of M.D. Anderson Cancer Center and C. Glenn Begley, the former head of global cancer research at Amgen, reported that when the company's scientists tried to replicate 53 prominent studies in basic cancer biology, hoping to build on them for drug discovery, they were able to confirm the results of only six.
The new initiative, said Begley, senior vice president of privately held biotechnology company TetraLogic, "recognizes that the problem of non-reproducibility exists and is taking the right steps to address it."

Those are some pretty awful statistics: confirming only six of 53 studies is a success rate of barely 11 percent. Without knowing exactly which studies these scientists were able to reproduce, the word "prominent" suggests to me that these could have been studies in higher-impact journals, which are supposed to have more rigorous peer-review and editing standards precisely to avoid publishing sloppy science. The top journals are not completely immune to publishing "bad" or problematic studies, but it is supposed to happen less often.
The initiative is a great idea and offers an important service. We need more resources like this in science, double-checking results to verify their validity. These resources also need to operate in a trustworthy environment, so that lead scientists feel comfortable sharing their data and techniques without fear of getting "scooped" -- I believe this (justifiable) fear is what prevents many of these checks from happening in the first place, so these "validation labs" would need to be neutral, non-competitive environments.
The diseases we are studying are complicated enough as it is; we don't need bad data or unreliable results obscuring their true nature and causes any more than the human body already does.
That is a really cool initiative! Thanks for sharing. I've always wondered how often people actually go out and reproduce experiments. Especially when the result wasn't very interesting... maybe they did it wrong! Maybe they had a sloppy undergraduate helping out who ruined things, and now the whole world misses out on a cure for cancer. haha