Accentuate the negative




Suppose you run a study to compare two groups of children:
say a dyslexic group and a control group. Your favourite theory predicts a
difference in auditory perception, but you find no difference between the
groups. What to do? You may feel a further study is needed: perhaps there were
floor or ceiling effects that masked true differences. Maybe you need more
participants to detect a small effect. But what if you can’t find flaws in the
study and decide to publish the result? You’re likely to hit problems. Quite
simply, null results are much harder to publish than positive findings. In
effect, you are telling the world “Here’s an interesting theory that could
explain dyslexia, but it’s wrong.” It’s not exactly an inspirational message,
unless the theory is so prominent and well-accepted that the null finding is surprising.
And if that is the case, then it’s unlikely that your single study is going to
be convincing enough to topple the status quo. It has been recognised for years
that this “file drawer problem” leads to distortion of the research literature,
creating an impression that positive results are far more robust than they
really are (Rosenthal, 1979).



The medical profession has become aware of the issue and
it’s now becoming common practice for clinical trials to be registered before a
study commences, and for journals to undertake to publish the results of
methodologically strong studies regardless of outcome. In the past couple of
years, two early-intervention studies with null results have been published, on
autism (Green et al., 2010) and late talkers (Wake et al., 2011). Neither study
creates a feel-good sensation: it’s disappointing that so much effort and good
intentions failed to make a difference. But it’s important to know that, to
avoid raising false hopes and wasting scarce resources on things that aren’t
effective. Yet it’s unlikely that either study would have found space in a
high-impact journal in the days before trial registration.


Registration can also exert an important influence in cases
where conflict of interest or other factors make researchers reluctant to
publish null results. For instance, in 2007, Cyhlarova et al. published a study
relating membrane fatty acid levels to dyslexia in adults. This research group
has a particular interest in fatty acids and neurodevelopmental disabilities,
and the senior author has written a book on this topic. The researchers
argued that the balance of omega-3 and omega-6 fatty acids differed between
dyslexics and non-dyslexics, and concluded: “To gain a more precise
understanding of the effects of omega-3 HUFA treatment, the results of this
study need to be confirmed by blood biochemical analysis before and after
supplementation”. They further stated that a
randomised controlled trial was underway. Yet four years later, no results have
been published and requests for information about the findings are met with
silence. If the trial had been registered, the authors would have been required
to report the results, or explain why they could not do so.


Advance registration of research is not a feasible option
for most areas of psychology, so what steps can we take to reduce publication
bias? Many years ago a wise journal editor told me that publication decisions
should be based on evaluation of just the Introduction and Methods sections of
a paper: if an interesting hypothesis had been identified, and the methods were
appropriate to test it, then the paper should be published, regardless of the
results.


People often respond to this idea by saying that it would just mean the
literature would be full of boring stuff. But remember, I'm not suggesting that
any old rubbish should get published: the Introduction has to make a good case
for doing the study, and the Methods have to be strong. Also, some kinds of
boring results are important: minimally, publication of a null result may save
some hapless
graduate student from spending three years trying to demonstrate an effect
that’s not there. Estimates of effect sizes in meta-analyses are compromised if
only positive findings get reported. More seriously, if we are talking about
research with clinical implications, then over-estimation of effects can lead
to inappropriate interventions being adopted.
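
To make the meta-analysis point concrete, here is a minimal simulation sketch. It is purely illustrative and not based on any of the studies discussed above: the true effect size of d = 0.2, the groups of 20, and the number of simulated studies are all assumed values. It runs many small two-group studies and compares the average effect size across all of them with the average across only those that reach p < .05:

```python
import math
import random
import statistics

# Illustrative assumptions: a modest true effect, small samples, many studies.
random.seed(1)
TRUE_D, N, STUDIES = 0.2, 20, 5000

def one_study():
    """Simulate one two-group study; return (observed Cohen's d, 'significant?')."""
    controls = [random.gauss(0.0, 1.0) for _ in range(N)]
    cases = [random.gauss(TRUE_D, 1.0) for _ in range(N)]
    pooled_sd = math.sqrt((statistics.variance(controls) +
                           statistics.variance(cases)) / 2)
    d = (statistics.mean(cases) - statistics.mean(controls)) / pooled_sd
    t = d * math.sqrt(N / 2)        # two-sample t for equal group sizes
    return d, abs(t) > 2.02         # approx. critical t for df = 38, p < .05

results = [one_study() for _ in range(STUDIES)]
all_ds = [d for d, _ in results]
published = [d for d, sig in results if sig]   # only "positive" findings survive

print(f"true effect size:              {TRUE_D:.2f}")
print(f"mean d, all studies:           {statistics.mean(all_ds):.2f}")
print(f"mean d, 'significant' studies: {statistics.mean(published):.2f}")
```

With groups this small the power is low, so the studies that clear the significance threshold are largely the ones that happened to overestimate the effect, and the "published" average comes out several times larger than the true value.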


Things are slowly changing and it’s getting easier to
publish null results. The advent of electronic journals has made a big
difference because there is no longer such pressure on page space. The
electronic journal PLOS ONE has adopted a publication policy that is pretty
close to that proposed by the wise editor: it states that it will publish all
papers that are technically sound. So my advice to those of you who have null data from
well-designed experiments languishing in that file drawer: get your findings
out there in the public domain.





References



Cyhlarova, E., Bell, J., Dick, J., MacKinlay, E., Stein, J., & Richardson, A. (2007). Membrane fatty acids, reading and spelling in dyslexic and non-dyslexic adults. European Neuropsychopharmacology, 17(2), 116-121. DOI: 10.1016/j.euroneuro.2006.07.003




Green, J., Charman, T., McConachie, H., Aldred, C., Slonims, V., Howlin, P., Le Couteur, A., Leadbitter, K., Hudry, K., Byford, S., Barrett, B., Temple, K., Macdonald, W., & Pickles, A. (2010). Parent-mediated communication-focused treatment in children with autism (PACT): a randomised controlled trial. The Lancet, 375(9732), 2152-2160. DOI: 10.1016/S0140-6736(10)60587-9
 




Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. DOI: 10.1037/0033-2909.86.3.638



Wake, M., Tobin, S., Girolametto, L., Ukoumunne, O. C., Gold, L., Levickis, P., Sheehan, J., Goldfeld, S., & Reilly, S. (2011). Outcomes of population based language promotion for slow to talk toddlers at ages 2 and 3 years: Let's Learn Language cluster randomised controlled trial. BMJ, 343. PMID: 21852344

