Blogging in the service of science


In my last blogpost, I made some critical comments about a paper that was published in 2003 in the Proceedings of the National Academy of Sciences (PNAS). There were a number of methodological failings that meant that the conclusions drawn by the authors were questionable. But that was not the only point at issue. In addition, I expressed concerns about the process whereby this paper had come to be published in a top journal, especially since it claimed to provide evidence for the efficacy of an intervention in which two of the authors had financial interests.


It’s been gratifying to see how this post has sparked off
discussion. To me it just emphasises the value of tweeting and blogging in
academic life: you can have a real debate with others all over the world. Unlike the conventional method of publishing in journals,
it’s immediate. But it’s better than face-to-face debate,
because people can think about what they write, and everyone can have their
say.


There are three rather different issues that people have
picked up on.


1. The first one concerns methods in functional brain
imaging; the debate is developing nicely on Daniel Bor’s blog
and I’ll not focus on it here.


2. The second issue concerns the unusual routes by which people get published in PNAS. Fellows of the National Academy of Sciences are able to publish material in the journal with only “light touch” review. In this article, Rand and Pfeiffer argue that this may be justified because papers
that are published via this route include some with very high citation counts.
My view is that the Temple et al article illustrates that this is a terrible
argument. Temple et al have had 270 citations, so would be categorised by Rand
and Pfeiffer as a “truly exceptional” paper. Yet, it contains basic
methodological errors that compromise its conclusions. I know some people would
use this as an argument against peer review, but I’d rather say this is an
illustration of what happens if you ignore the need for rigorous review. Of
course, peer review can go wrong, and often does. But in general, a journal’s
reputation rests on it not publishing flawed work, and that’s why I think
there’s still a role for journals in academic communications. I would urge the
editors of PNAS, however, to rethink their publication policy so that all
papers, regardless of the authors, get properly reviewed by experts in the
field. Meanwhile, people might like to add their own examples of highly cited
yet flawed PNAS “contributions” to the comments on this blogpost.


3. The third issue is an interesting one raised by Neurocritic, who asked “How much of the neuroimaging literature should we discard?” Jon Simons (@js_simons) then tweeted “It’s not about discarding, but learning”. And, on further questioning, he added “No study is useless. Equally, no study means anything in isolation. Indep replication is key.” and then “Isn't it the overinterpretation of the findings that's the problem rather than paper itself?” Now, I’m afraid this was a bit too much for me. My view of the Temple
et al study was that it was not so much useless as positively misleading. It
was making claims about treatment efficacy that were used to promote a particular
commercial treatment in which the authors had a financial interest. Because it
lacked a control group, it was not possible to conclude anything about the
intervention effect. So to my mind the problem was “the paper itself”, in that
the study was not properly designed. Yet it had been massively influential and
almost no-one had commented on its limitations.


At this point, Ben Goldacre (@bengoldacre) got involved. His concerns were
rather different to mine, namely “retraction / non-publication of bad papers
would leave the data inaccessible.” Now,
this strikes me as a rather odd argument. Publishing a study is NOT the same as
making the data available. Indeed, in many cases, as in this one, the one thing
you don’t get in the publication is the data. For instance, there’s lots of
stuff in Temple et al that was not reported. We’re told very little about the
pattern of activations in the typical-reader group, for instance, and there’s a
huge matrix of correlations that was computed with only a handful actually
reported. So I think Ben’s argument about needing access to the data is beside
the point. I love data as much as he does, and I’d agree with him that it would
be great if people deposited data from their studies in some publicly available
archive so nerdy people could pick over them. But the issue here is not about
access to data. It’s about what to do with a paper that's already published in a top journal and is actually polluting the scientific process because its misleading conclusions are being propagated through the literature.


My own view is that it would be good for the field if this paper were removed from the journal, but I’m a realist and I know that won’t happen. Neurocritic has an excellent discussion of retraction and alternatives to retraction in a recent post, which has stimulated some
great comments. As he notes, retraction is really reserved for cases of fraud or
factual error, not for poor methodology. But, depressing though this is, I’m
encouraged by the way that social media is changing the game here. The Arsenic Life story was a great example of how misleading, high-profile work can get
put in perspective by bloggers, even if peer reviewers haven’t done their job
properly.  If that paper had been
published five years ago, I am guessing it would have been taken far more
seriously, because of the inevitable delays in challenging it through official
publication routes. Bloggers not only allowed us to see what the flaws were, but also rapidly indicated a consensus of concern among experts in the field. The openness of the blogosphere means that the opinions of one or two jealous or spiteful reviewers will not be allowed to hold back good work, but equally, cronyism just won’t be possible.


We
already have quite a few ace neuroscientist bloggers: I hope that more will be
encouraged to enter the fray and help offer an alternative, informal commentary
on influential papers as they appear.