The coins are often very old by the time they reach the jeweller
With his hands and ashes he will try the best he can
He knows that he can only shine them
Cannot repair the scratches
The Jeweller (by Pearls Before Swine)
Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for schizophrenia.
Niemeyer, Musch & Pietrowsky (2012)
Dan Dreiberg: "I'm not the one still hiding behind a mask." Rorschach: "No. You're hiding in plain sight." Exchange from Watchmen
Hiding things in plain sight is often the best place to hide them! In my last blog post, I referred to Daylight Robbery Syndrome, where researchers state things so boldly that readers may be convinced of their validity even when they are inconsistent with the data. Here I would like to look at how researchers may, intentionally or unintentionally, hide something right in front of the reader; in particular, at how the methods used to detect bias in meta-analyses may themselves be prone to biases of their own.
The paper in question, published recently in Schizophrenia Research, examines publication bias in studies of "psychotherapeutic interventions for schizophrenia".
[Video: opening scene from Watchmen, set to "Unforgettable" by Nat King Cole]
As the authors Niemeyer et al rightly state:
"Meta-analyses are prone to publication bias, the problem of selective publication of studies with positive results. It is unclear whether the efficacy of psychotherapeutic interventions for schizophrenia is overestimated due to this problem. This study aims at enhancing the validity of the results of meta-analyses by investigating the degree and impact of publication bias."
This is certainly true of trials of psychological interventions, where the decision to submit a paper for publication is related to the outcome of the trial. For example, Coursol and Wagner (1986) found that researchers submitted 82% of therapy studies with positive outcomes (i.e. clients improved) for publication, but only 43% of those with negative outcomes (i.e. clients did not improve); for similar conclusions from a recent meta-analysis, see Hopewell et al (2009).
Returning to the current paper: Niemeyer et al used standard meta-analytic methods to estimate bias, including Begg and Mazumdar's adjusted rank correlation test, Egger's regression analysis, and the trim-and-fill procedure, applying these techniques to data sets derived from systematic reviews up to September 2010. I remarked briefly on these bias-detection methods in my previous post, 'Negativland'; a rough sketch of the first two tests follows below.
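For readers who want to see the moving parts, here is a minimal Python sketch of the first two tests. This is my own illustration, not code from any of the papers discussed: Egger's test regresses the standardised effects on precision and asks whether the intercept departs from zero, while Begg and Mazumdar's test computes Kendall's tau between the standardised (centred) effects and their variances.

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression: regress standardised effects (effect/SE) on
    precision (1/SE); an intercept far from zero suggests funnel-plot
    asymmetry, i.e. possible publication bias."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    precision = 1.0 / se
    z = effects / se
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coef
    n = len(z)
    s2 = resid @ resid / (n - 2)              # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)         # coefficient covariance
    t_stat = coef[0] / np.sqrt(cov[0, 0])     # test the INTERCEPT, not the slope
    p = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return coef[0], p

def begg_mazumdar_test(effects, variances):
    """Begg & Mazumdar's adjusted rank correlation: Kendall's tau between
    the standardised (centred) effects and their variances."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    pooled = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled mean
    v_star = v - 1.0 / np.sum(w)              # variance of (effect - pooled)
    t_star = (effects - pooled) / np.sqrt(v_star)
    tau, p = stats.kendalltau(t_star, v)
    return tau, p
```

With the small numbers of trials typical of psychotherapy meta-analyses, both tests have low power, so "no significant bias" is weaker evidence than it sounds.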
Following their analyses, Niemeyer et al concluded:
"Overall, we found only moderate evidence for the presence of publication bias. With one notable exception, the pattern of efficacy of psychotherapy for schizophrenia was not changed in the data sets in which publication bias was found. Several efficacious therapies exist, and their efficacy does not seem to be the result of publication bias."
This apparent lack of bias might be contrasted with the large bias documented by Cuijpers et al (2010) in studies examining CBT for depression. Cuijpers and colleagues found a mean effect size of 0.67 across 175 comparisons of CBT with a control condition; adjusting for publication bias using Duval and Tweedie's trim-and-fill procedure, however, reduced the mean effect size to 0.42, with 51 studies assumed to be missing, i.e. residing in file drawers because they were negative. (A simplified sketch of the procedure follows below.)
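For the curious, here is a deliberately simplified sketch of the trim-and-fill idea. Again, this is my own illustration, a fixed-effect version using Duval and Tweedie's L0 estimator, and it assumes the missing studies sit on the left (negative) side of the funnel:

```python
import numpy as np
from scipy.stats import rankdata

def trim_and_fill(effects, variances, max_iter=50):
    """Simplified fixed-effect trim and fill (Duval & Tweedie's L0
    estimator), assuming studies are missing on the LEFT of the funnel.
    Returns the bias-adjusted pooled effect and the number of imputed
    ('filled') studies."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    n = len(effects)
    order = np.argsort(effects)        # ascending: largest effects last
    k0 = 0
    for _ in range(max_iter):
        # Trim the k0 largest effects, then re-estimate the pooled mean
        keep = order[: n - k0]
        pooled = np.sum(weights[keep] * effects[keep]) / np.sum(weights[keep])
        # Rank |centred effects|; Tn sums the ranks of right-side studies
        centred = effects - pooled
        ranks = rankdata(np.abs(centred))
        tn = ranks[centred > 0].sum()
        new_k0 = max(0, int(round((4 * tn - n * (n + 1)) / (2 * n - 1))))
        if new_k0 == k0:               # iterate until k0 stabilises
            break
        k0 = new_k0
    # Fill: mirror the k0 most extreme studies around the pooled mean
    mirrored = order[n - k0:]          # empty if k0 == 0
    filled_eff = np.concatenate([effects, 2 * pooled - effects[mirrored]])
    filled_var = np.concatenate([variances, variances[mirrored]])
    w = 1.0 / filled_var
    return np.sum(w * filled_eff) / np.sum(w), int(k0)
```

The point to notice is that the "adjusted" estimate is only as good as the funnel-plot symmetry assumption driving the imputation; it was machinery of this kind that turned Cuijpers' 0.67 into 0.42 once 51 mirrored studies were filled in.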
[Video: "The Jeweller" by Pearls Before Swine]
I intend to concentrate briefly on the 10 data sets from meta-analyses of CBT for schizophrenia (2 data sets from Lynch et al 2010; 1 from Lincoln et al 2008; 1 from Wykes et al 2008; 5 from Zimmermann et al 2005; and 1 from Jones et al 2010; see Table 1).
Table 1. Bias analysis of CBT for schizophrenia meta-analyses (from Niemeyer et al 2012)
1) Our meta-analysis (Lynch, Laws & McKenna 2010), criticised by some for being overly selective (because we analysed high-quality studies using an active control group!), produced the data set with the fewest imputed (i.e. missing) studies.
2) The Wykes et al (2008) analysis is curious. First, the authors state that the effect size was Cohen's d, when in fact Glass' delta was used (a quite different effect size; both are sketched after this list). This could of course be a simple error. The choice of outcome variable, however, is not an error: the authors chose to analyse only the low-quality studies from the Wykes paper. Why would a study of bias select only the low-quality studies, and not the high-quality ones or at least both?
Table 2 shows where these (positive symptom) effect sizes in the Wykes et al paper were derived from. What is clear is that the low-quality studies are not significantly heterogeneous, while the high-quality studies show significant heterogeneity (the usual measures, Cochran's Q and I², are also sketched after this list); this is partly borne out by the far broader 95% confidence intervals in the high-quality studies, even though they number almost half as many as the low-quality studies.
Table 2. Effect sizes from Wykes et al (2008)
By selecting only the low-quality studies, the probability of finding bias may well be diminished; at the very least, the estimate of bias is unreliable.
3) From the Lincoln et al (2008) meta-analysis, the authors selected data for 9 studies comparing CBT vs TAU. Omitted, however, was an additional comparison of 10 studies of CBT vs an active control (see Table 3 below).
Table 3. Data from the Lincoln et al (2008) meta-analysis
The notable thing, again, is that the authors chose to exclude the one analysis that has: a) a larger sample, b) a non-significant effect, and c) far wider 95% confidence intervals, i.e. greater variance. Again, these factors could obviously conspire against finding bias, and they leave us uncertain about bias in this meta-analysis.
4) From the Zimmermann et al (2005) meta-analysis, the authors included 5 analyses, and all bar the last had proportionally large numbers of imputed studies. The one comparison that produced no imputed studies was the comparison with an 'active' control (like Lynch et al, which also produced no imputed studies).
5) Finally, Niemeyer et al selected one comparison from the Jones et al (2010) Cochrane meta-analysis, a somewhat odd choice: the 'relative risk of leaving the study early'.
These decisions are made odder still, and less reliable, by the fact that Niemeyer et al failed to include any data from meta-analyses by the same Cochrane group examining symptoms (Jones et al 2004), or indeed from other meta-analyses such as those by Rector and Beck (2002) or the UK NICE committee (2009).
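As promised in point 2 above, here is a quick sketch, again mine and purely illustrative, of why Cohen's d and Glass' delta are not interchangeable, together with the standard heterogeneity measures (Cochran's Q and I²) underlying the high- versus low-quality contrast in Table 2:

```python
import numpy as np
from scipy import stats

def cohens_d(m_tx, m_ctl, sd_tx, sd_ctl, n_tx, n_ctl):
    """Cohen's d: mean difference over the POOLED standard deviation."""
    sd_pooled = np.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                        / (n_tx + n_ctl - 2))
    return (m_tx - m_ctl) / sd_pooled

def glass_delta(m_tx, m_ctl, sd_ctl):
    """Glass' delta: mean difference over the CONTROL-group SD only.
    The two diverge whenever treatment and control variances differ."""
    return (m_tx - m_ctl) / sd_ctl

def heterogeneity(effects, variances):
    """Cochran's Q and I^2 under a fixed-effect model: Q tests whether the
    studies share one true effect; I^2 is the % of variation beyond chance."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100
    return q, stats.chi2.sf(q, df), i2
```

If the treatment group's SD is, say, twice the control group's, d and delta give quite different numbers for the same mean difference, which is why mislabelling one as the other matters; and high heterogeneity inflates exactly the kind of funnel-plot spread on which the bias tests depend.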
Anyway, to conclude: the bias analysis of CBT for psychosis by Niemeyer et al is itself biased by unexplained choices made by the reviewers. Not being psychic, I have no idea why they made these choices, or what difference it would make to include different measures and additional meta-analyses. One thing I do know, however, is that any claim that bias does not exist in studies examining CBT for psychosis is itself biased and unreliable! Researchers are familiar with the idea of GIGO (Garbage In, Garbage Out), which has often been levelled at meta-analysis; perhaps we now need to consider META-GIGO!
This is one more reason to introduce a mandatory study registration system for all intervention studies that hope to be published in academic journals. Without one, we will never know the true extent of any bias. Such a system would need to record basic information such as the hypotheses to be tested and the planned analysis methods.