I am a big fan of science and so am happy to see that current problems with scientific research are being tackled.
How one psychologist is tackling human biases in science.
Given that science has uncovered a dizzying variety of cognitive biases,
the relative neglect of their consequences within science itself is
peculiar. "I was aware of biases in humans at large," says Hartgerink,
"but when I first 'learned' that they also apply to scientists,
I was somewhat amazed, even though it is so obvious."
One of the reasons the science literature gets skewed is that journals
are much more likely to publish positive than negative results:
It's easier to say something is true than to say it's wrong.
Journal referees might be inclined to reject negative results as too
boring, and researchers currently get little credit or status, from
funders or departments, for such findings. "If you do 20 experiments,
one of them is likely to have a publishable result," Oransky and Marcus
write. "But only publishing that result doesn't make your findings valid.
In fact it's quite the opposite."
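(An aside on the arithmetic behind that remark, not from Oransky and
Marcus: if all 20 experiments were testing true null hypotheses at the
conventional p < 0.05 threshold, the chance of getting at least one
"significant" result is already close to two in three. A minimal sketch:)

    # Illustration only: the chance of at least one "significant" result
    # among 20 independent experiments when every null hypothesis is true.
    alpha = 0.05            # conventional significance threshold
    n_experiments = 20
    p_at_least_one = 1 - (1 - alpha) ** n_experiments
    print(round(p_at_least_one, 2))   # ~0.64

Publishing only that one hit, as the quote says, tells you nothing about
whether the effect is real.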
...
Surprisingly, Nosek thinks that one of the most effective solutions
to cognitive bias in science could come from the discipline that
has weathered some of the heaviest criticism recently for its
error-prone and self-deluding ways: pharmacology. It is precisely
because these problems are so manifest in the pharmaceutical
industry that this community is, in Nosek's view, way ahead of
the rest of science in dealing with them. For example, because of
the known tendency of drug companies and their collaborators to
report positive results of trials and to soft-pedal negative ones,
it is now a legal requirement in the United States for all clinical
trials to be entered in a registry before they begin. This obliges
the researchers to report the results whatever they say.
...
The idea, says Nosek, is that researchers "write down in advance
what their study is for and what they think will happen." Then
when they do their experiments, they agree to be bound to analyzing
the results strictly within the confines of that original plan. It
sounds utterly elementary, like the kind of thing we teach children
about how to do science. And indeed it is--but it is rarely what
happens. Instead, as Fiedler testifies, the analysis gets made
on the basis of all kinds of unstated and usually unconscious
assumptions about what would or wouldn't be seen. Nosek says that
researchers who have used the OSF* have often been amazed at how,
by the time they come to look at their results, the project has
diverged from the original aims they'd stated.
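(A toy sketch of the preregistration idea described above -- assuming
nothing about how the OSF actually implements it; every name below is
hypothetical. The point is only that the plan is frozen before data
collection, and later analyses are compared against it, so drift from
the original aims becomes visible rather than silent.)

    # Hypothetical illustration of preregistration, not the OSF's API.
    from dataclasses import dataclass

    @dataclass(frozen=True)   # frozen: the plan cannot be edited afterwards
    class Preregistration:
        hypothesis: str
        primary_outcome: str
        planned_test: str
        planned_sample_size: int

    plan = Preregistration(
        hypothesis="Priming with X increases Y",
        primary_outcome="Y score",
        planned_test="two-sample t-test, alpha = 0.05",
        planned_sample_size=120,
    )

    def check_against_plan(test_used: str, n_collected: int) -> None:
        """Report deviations from the frozen plan instead of hiding them."""
        if test_used != plan.planned_test:
            print(f"Deviation: planned '{plan.planned_test}', ran '{test_used}'")
        if n_collected != plan.planned_sample_size:
            print(f"Deviation: planned n={plan.planned_sample_size}, "
                  f"collected n={n_collected}")

    check_against_plan(test_used="Mann-Whitney U, alpha = 0.05", n_collected=96)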
Tracking retractions as a window into the scientific process.
So why write a blog on retractions?
First, science takes justifiable pride in the fact that it is
self-correcting -- most of the time. Usually, that just means
more or better data, not fraud or mistakes that would require
a retraction. But when a retraction is necessary, how long does
that self-correction take? The Wakefield retraction, for example,
was issued 12 years after the original study, and six years after
serious questions had been raised publicly by journalist
Brian Deer.
Retractions are therefore a window into the scientific
process.
Second, retractions are not often well-publicized. Sure, there are
the high-profile cases such as Reuben's and Wakefield's. But most
retractions live in obscurity in Medline and other databases. That
means those who funded the retracted research -- often taxpayers
-- aren't particularly likely to find out about them. Nor are
investors always likely to hear about retractions on basic science
papers whose findings may have formed the basis for companies into
which they pour dollars. So we hope this blog will form an informal
repository for the retractions we find, and might even spur the
creation of a retraction database such as the one called for here
by K.M. Korpela.
Third, they're often the clues to great stories about fraud or
other malfeasance, as Adam learned when he chased down the Reuben
story. The reverse can also be true. The Cancer Letter's exposé of
Potti and his fake Rhodes Scholarship is what led his co-authors to
remind The Lancet Oncology of their concerns, and then the editors
to issue their expression of concern. And they can even lead to
lawsuits for damaged reputations. If highlighting retractions will
give journalists more tools to uncover fraud and misuse of funds,
we're happy to help. And if those stories are appropriate for our
respective news outlets, you'll only read about them on Retraction
Watch once we've covered them there.
Finally, we're interested in whether journals are consistent. How
long do they wait before printing a retraction? What requires
one? How much of a public announcement, if any, do they make? Does a
journal with a low rate of retractions have a better peer review and
editing process, or is it just sweeping more mistakes under the rug?
Crowd-sourced effort raises nuanced questions about what counts as replication.
An ambitious effort to replicate 100 research findings in psychology
ended last week -- and the data look worrying. Results posted online
on 24 April, which have not yet been peer-reviewed, suggest that key
findings from only 39 of the published studies could be reproduced.
But the situation is more nuanced than the top-line numbers suggest
(see graphic, 'Reliability test'). Of the 61 non-replicated studies,
scientists classed 24 as producing findings at least "moderately similar"
to those of the original experiments, even though they did not meet
pre-established criteria, such as statistical significance, that would
count as a successful replication.
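(For concreteness, one commonly used pre-established criterion -- an
assumption on my part, not the project's full protocol -- is that the
replication shows an effect in the same direction as the original and
reaches statistical significance on its own:)

    # Illustrative replication criterion; the actual project used several
    # pre-registered criteria, of which this is only the most common kind.
    def counts_as_replication(original_effect: float, replication_effect: float,
                              replication_p: float, alpha: float = 0.05) -> bool:
        same_direction = (original_effect > 0) == (replication_effect > 0)
        return same_direction and replication_p < alpha

    print(counts_as_replication(0.48, 0.21, 0.09))   # False: same direction, not significant
    print(counts_as_replication(0.48, 0.35, 0.03))   # True

A study can fail such a test and still look "moderately similar" to the
original, which is the nuance behind the top-line numbers.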
More stringent quality criteria are needed for models used at the science/policy interface; here is a checklist to aid in their responsible development and use.
Against this background of declining trust and increasing problems
with the reliability of scientific knowledge in the public sphere,
the dangers for science become most evident when models -- abstracts
of more complex real-world problems, generally rendered in mathematical
terms -- are used as policy tools. Evidence of poor modeling practice
and of negative consequences for society abounds.
...
The situation is equally serious in the field of environmental
regulatory science. Orrin Pilkey and Linda Pilkey-Jarvis,
in a stimulating small volume titled
Useless Arithmetic:
Why Environmental Scientists Can't Predict the Future,
offer a particularly accessible series of horror stories about model
misuse and consequent policy failure. They suggest, for example, that the
global change modeling community should publicly recognize that the
effort to quantify the future at a scale that would be useful for
policy is an academic exercise. They call modeling counterproductive
in that it offers the illusion of accurate predictions about climate
and sea level decades and even centuries in the future. Pilkey and
Pilkey-Jarvis argue that given the huge time scales, decisionmakers
(and society) would be much better off without such predictions,
because the accuracy and value of the predictions themselves end up
being at the center of policy debates, and distract from the need
and capacity to deal with the problem despite ongoing uncertainties.
...
In this light, we wish to revisit statistician George E. P. Box's
1987 observation that "all models are wrong but some are useful."
We want to propose a key implication of Box's aphorism for science
policy: that stringent criteria of transparency must be adopted
when models are used as a basis for policy assessments. Failure
to open up the black box of modeling is likely to lead only to
greater erosion of the credibility and legitimacy of science as a
tool for improved policymaking. In this effort, we will follow The
Economist's recommendations and provide a checklist, in the form
of specific rules for achieving this transparency.
Scott Adams
What's science's biggest fail of all time?
I nominate everything about diet and fitness.
...
Science is an amazing thing. But it has a credibility issue that it earned.
Should we fix the credibility situation by brainwashing skeptical citizens
to believe in science despite its spotty track record, or is society's
current level of skepticism healthier than it looks? Maybe science is
what needs to improve, not the citizens.
Scientists like to think of science as self-correcting. To an alarming degree, it is not. The Economist, Oct 19th 2013
Dr Kahneman and a growing number of his colleagues fear that a lot of
this priming research is poorly founded. Over the past few years
various researchers have made systematic attempts to replicate some
of the more widely cited priming experiments.
Many of these replications have failed.
In 2005 John Ioannidis, an epidemiologist from Stanford University,
caused a stir with a paper showing why, as a matter of statistical logic,
the idea that only one such paper in 20 gives a false-positive result
was hugely optimistic. Instead, he argued, "most published research
findings are probably false."
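(The gist of that statistical logic can be sketched with made-up but
plausible numbers -- these are assumptions for illustration, not
Ioannidis's published estimates. If only a small fraction of tested
hypotheses are true and studies are underpowered, the false positives
generated by the many true nulls can swamp the true positives among the
"significant" findings:)

    # Illustration of the logic only; the inputs are assumed for the example.
    def share_of_true_findings(prior: float, power: float, alpha: float = 0.05) -> float:
        """Fraction of 'significant' results that reflect real effects."""
        true_pos = prior * power            # real effects correctly detected
        false_pos = (1 - prior) * alpha     # true nulls that cross p < 0.05 anyway
        return true_pos / (true_pos + false_pos)

    # Few true hypotheses, underpowered studies: most positives are false.
    print(round(share_of_true_findings(prior=0.05, power=0.5), 2))   # ~0.34
    # Stronger priors and adequate power: the picture improves.
    print(round(share_of_true_findings(prior=0.5, power=0.8), 2))    # ~0.94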
The Project on Scientific Knowledge and Public Policy
The Project on Scientific Knowledge and Public Policy (2002-2008)
examined the nature of science and how it is used and misused in
government decision-making and legal proceedings. Through empirical
research, conversations among scholars, and publications, SKAPP
aimed to enhance understanding of how knowledge is generated and
interpreted. SKAPP promoted transparent decision-making, based on
the best available science, to protect public health.
How and why science works may be difficult for non-scientists to
understand. The aura around science and scientists -- reflecting the
power of scientific understanding and its complexity -- creates
opportunities for misunderstanding and misuse of scientific
evidence. Indeed, failure on the part of decision-makers to
understand the norms of science may lead to inaccurate conclusions
and inappropriate applications of scientific results.