Saturday, 22 December 2012

Santa Dog

Santa Dog's a Jesus Fetus
Santa Dog's a Jesus Fetus
Santa Dog's a Jesus Fetus
Has no presents,
Has no presence
In the future...
...In the future
I started blogging in mid-March 2012 and have made 18 posts so far. As we approach Christmas and the end of the year, here are my top five posts in terms of views.
Santa Dog by the Residents (my favourite xmas song)

Number 5: What's Your Poison - LSD vs Alcohol


At number 5 was my very first post, on the meta-analysis by Krebs & Johansen suggesting the use of LSD as a treatment for alcoholism. The study attracted a lot of media attention, including the following striking comment from Prof David Nutt: "Overall there is a big effect, show me another treatment with results as good; we've missed a trick here…This is probably as good as anything we've got [for treating alcoholism]." Needless to say, I was not as convinced, and my blog drew some ire from one of the authors, who e-mailed me asking me to refrain from any further public comments and then posted their own response on the Nature website (strangely, through an intermediary!)


Number 4: Strange Fruit: Is Racism a Mental Illness? 

At number 4 was my blog on whether racism could be a form of mental illness. It was originally spurred by a newspaper article about a man who tried, and failed, to use his schizophrenia diagnosis in court to mitigate his violent racist behaviour, and by a subsequent Twitter interaction with @JonesNev, who argued that schizophrenia "often does lead otherwise liberal, kind, non-racist people to become glaringly, bluntly racist, sexist, phobic, nymphomanic, hostile". Anyway, it led me to investigate past claims from psychiatrists that racism is an abnormal belief that could qualify as a delusion.

Number 3: CBT: She's Lost Control(s) Again

Number 3 was one of my several blogs about CBT: in this case, the first study to examine CBT for alleviating symptoms in people with psychosis who have chosen to remain unmedicated. The study attracted wide media attention (e.g. on BBC Radio 4's All in the Mind; link in the blog), which to my mind was premature for an unfinished trial. The study contains major methodological flaws; nobody should view it as anything other than fatally flawed, and I await the results of their properly controlled trial for some interpretable data.

Number 2: CBT: You Spin me Round

At number 2, another CBT study, with the same main author as the unmedicated trial at number 3. This time CBT was being used to prevent transition to psychosis. The main upshot is that it failed to show any effect, yet the authors went to extraordinary lengths to spin the results positively. We should not blame journalists when results are unreasonably or falsely represented in the media; in this case, however, the authors, the media and even the journal (the British Medical Journal) were all at fault for spinning uniformly negative results.

Number 1: Negativland: What to do about negative findings?

By a clear margin, my Negativland blog received the most hits. This post concerned how we deal with negative findings in science (psychology in particular). Since that post, a great deal of much-needed discussion has occurred around this issue and around how psychology might get its house in order. I have since expanded the ideas in this blog considerably and, hopefully, an article will appear in the Open Access journal BMC Psychology in the New Year.
I have enjoyed my 10-month foray into blogging and much appreciate all of the feedback and interactions that have stemmed from it - thanks for your interest.
Merry Xmas and a Happy New Year

Tuesday, 11 December 2012

Significantly nonsignificant

In the morning I'd awake and I couldn't remember
What is love and what is hate? - the calculations error

Flaming Lips (In the Morning of the Magicians)

Cognitive behaviour therapy for psychosis can be adapted for minority ethnic groups: A randomised controlled trial
Shanaya Rathod, Peter Phiri, Scott Harris, Charlotte Underwood, Mahesh Thagadur, Uma Padmanabi & David Kingdon

Recently I have commented on what I see as methodologically poor, biased and spun studies of CBT for psychosis. Every time I want to blog about something else, out comes another questionable study - this time 'in press' at Schizophrenia Research - which I think requires comment, in particular regarding the gap between the reality and the presentation of its findings.

This is an RCT examining the use of CBT in UK minority-ethnic groups with a diagnosis of schizophrenia.
A total of 33 participants were randomly allocated to CBT for psychosis (CBTp, n = 16) or treatment as usual (TAU, n = 17). Although this is a relatively small study, the authors report a power analysis, based on previous pilot studies, suggesting that a minimum of 12 participants per arm would give sufficient power - a figure they obviously exceed.
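As an aside, the arithmetic behind such a power claim is easy to sketch with the standard normal-approximation formula for a two-sample comparison. The paper's actual assumptions are not reproduced here, so the figures below are purely illustrative:

```python
import math
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison,
    via the normal-approximation formula n = 2 * ((z_a + z_b) / d)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance criterion
    z_b = NormalDist().inv_cdf(power)          # desired power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

# A conventional 'medium' effect (d = 0.5) needs far more than 12 per arm;
# only a very large assumed effect brings the requirement down that low.
for d in (0.5, 0.8, 1.2):
    print(f"d = {d}: {n_per_arm(d)} per arm")
```

On this approximation, a claim that 12 per arm suffices implies an assumed effect size well above d = 1 - a strikingly optimistic expectation for a psychological therapy.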

Did the CBT group show any benefit (i.e. reduction of symptoms) over the TAU group?

Well, the authors say yes in the abstract:
Results: Post-treatment, the intervention group showed statistically significant reductions in symptomatology on overall CPRS scores, CaCBTp Mean (SD) = 16.23 (10.77), TAU = 18.60 (14.84); p = 0.047, with a difference in change of 11.31 (95% CI: 0.14 to 22.49); Schizophrenia change: CaCBTp = 3.46 (3.37); TAU = 4.78 (5.33) diff 4.62 (95% CI: 0.68 to 9.17); p = 0.047 and positive symptoms (delusions; p = 0.035, and hallucinations; p = 0.056). At 6 months follow-up, MADRAS change = 5.6 (95% CI: 2.92 to 7.60); p < 0.001. Adjustment was made for age, gender and antipsychotic medication.
Conclusion: Participants in the CaCBTp group achieved statistically significant results post-treatment compared to those in the TAU group with some gains maintained at follow-up. High levels of satisfaction with the CaCBTp were reported.

Flaming Lips - In the Morning of the Magicians

The key aspect here, though, is the statement added casually at the end of the results about 'adjustment'. Table 1 from the paper, presented below, indicates (as the authors rightly acknowledge) that the TAU group were older, had a longer duration of illness and were on more medication, and the authors duly adjusted their analyses for these variables.

However, the abstract results all refer to unadjusted scores - see Table 2 below; all are taken from the non-adjusted column. If you glance at the final column (adjusted reduction from baseline), not a single comparison is significant!

Table 2 Results
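To see why adjustment matters so much here, consider a toy simulation - invented numbers, not the trial's data - in which the true treatment effect is zero but one group happens to be older and age drives the outcome. The unadjusted comparison looks 'significant'; the covariate-adjusted one does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50

# Confounded allocation: the TAU group is older, as in the trial
age_cbt = rng.normal(35, 5, n)
age_tau = rng.normal(45, 5, n)
# Outcome (symptom score) depends on age only - the true treatment effect is zero
y_cbt = 1.0 * age_cbt + rng.normal(0, 5, n)
y_tau = 1.0 * age_tau + rng.normal(0, 5, n)

# Unadjusted comparison: 'significant' purely because of the age imbalance
t_unadj, p_unadj = stats.ttest_ind(y_cbt, y_tau)

# Adjusted comparison: OLS of outcome on group + age (ordinary least squares)
group = np.r_[np.zeros(n), np.ones(n)]
age = np.r_[age_cbt, age_tau]
y = np.r_[y_cbt, y_tau]
X = np.column_stack([np.ones(2 * n), group, age])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = len(y) - X.shape[1]
cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
t_adj = beta[1] / np.sqrt(cov[1, 1])
p_adj = 2 * stats.t.sf(abs(t_adj), dof)

print(f"unadjusted p = {p_unadj:.2g}, adjusted p = {p_adj:.2g}")
```

The group names and numbers are hypothetical; the point is only that when groups differ on a covariate that predicts outcome, the unadjusted p-value is the wrong one to headline.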

Maybe someone can explain to me what is happening here. Is it blatant spinning by the authors to secure further funding (which they mention in the discussion)? Collusion from reviewers who let this through? Or simply a failure by reviewers to spot the obvious?
Perhaps I am overly critical - answers on a postcard (in the comments section, please).

Monday, 3 December 2012

Who Watches the Watchmen? Bias in Studying Bias

The coins are often very old by the time they reach the jeweller
With his hands and ashes he will try the best he can
He knows that he can only shine them
Cannot repair the scratches
The Jeweller (by Pearls Before Swine)
Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for schizophrenia.
Niemeyer, Musch & Pietrowsky (2012)
Dan Dreiberg: "I'm not the one still hiding behind a mask." Rorschach: "No. You're hiding in plain sight." - an exchange from Watchmen

Hiding things in plain sight is often the best way to hide them! In my last blog, I referred to Daylight Robbery Syndrome, where researchers state things so boldly that readers may be convinced of their validity even when they are not consistent with the data. Here I would like to consider how researchers may, intentionally or unintentionally, hide something in front of the reader; in particular, how the methods used for detecting bias in meta-analyses may themselves be biased.
This current paper, published recently in Schizophrenia Research, examines publication bias in studies of "psychotherapeutic interventions for schizophrenia".
 Opening Scene from the Watchmen (Unforgettable by Nat King Cole)

As the authors Niemeyer et al rightly state:
"Meta-analyses are prone to publication bias, the problem of selective publication of studies with positive results. It is unclear whether the efficacy of psychotherapeutic interventions for schizophrenia is overestimated due to this problem. This study aims at enhancing the validity of the results of meta-analyses by investigating the degree and impact of publication bias."

This is certainly a live issue for trials of psychological interventions, where the decision to submit a paper for publication is related to the outcome of the trial. For example, Coursol and Wagner (1986) found that when therapeutic studies had positive outcomes (i.e. clients improved), 82% of researchers submitted their paper, but when outcomes were negative (clients did not improve), only 43% did (for similar conclusions from a recent meta-analysis, see Hopewell et al 2009).
Returning to the current paper, Niemeyer et al used standard meta-analytic methods to estimate bias, including Begg and Mazumdar's adjusted rank correlation test, Egger's regression analysis and the trim and fill procedure. They applied these techniques to data sets derived from systematic reviews up to September 2010. I remarked briefly on these bias methods in my previous post 'Negativland'.
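For readers unfamiliar with these techniques, Egger's test is simple enough to sketch: regress each study's standardised effect (effect/SE) on its precision (1/SE). In an unbiased literature the intercept should sit near zero, while small-study inflation pushes it away from zero. The numbers below are invented purely for illustration:

```python
import numpy as np

se = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # standard errors, small to large studies
precision = 1 / se

# Unbiased literature: every study estimates the same true effect (0.4)
unbiased = np.full_like(se, 0.4)
# Biased literature: the smaller (high-SE) studies report inflated effects
biased = 0.4 + 0.5 * se

for label, effects in [("unbiased", unbiased), ("biased", biased)]:
    z = effects / se                           # standardised effects
    slope, intercept = np.polyfit(precision, z, 1)
    print(f"{label}: Egger intercept = {intercept:.2f}")
```

A real Egger analysis also tests whether the intercept differs significantly from zero; the sketch shows only the asymmetry logic.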

Following their analyses, Niemeyer et al concluded:
"Overall, we found only moderate evidence for the presence of publication bias. With one notable exception, the pattern of efficacy of psychotherapy for schizophrenia was not changed in the data sets in which publication bias was found. Several efficacious therapies exist, and their efficacy does not seem to be the result of publication bias."
This apparent lack of bias might be contrasted with the large bias documented by Cuijpers et al (2010) in studies examining CBT for depression. Cuijpers and colleagues found an effect size of 0.67 across 175 comparisons of CBT with a control condition; however, adjustment for publication bias using Duval & Tweedie's trim and fill procedure reduced the mean effect size to 0.42, with 51 studies assumed to be missing, i.e. residing in file drawers because they were negative.


The Jeweller (by Pearls Before Swine)

I intend to concentrate briefly on the 10 data sets from meta-analyses of CBT for schizophrenia (2 data sets from Lynch et al 2010; 1 from Lincoln et al 2008; 1 from Wykes et al 2008; 5 from Zimmerman et al 2005; and 1 from Jones et al 2010: see Table 1).
Table 1. Bias analysis of CBT for schizophrenia meta analyses (from Niemeyer & Musch 2012)

A few notable features of Table 1 and the bias analysis stand out:

1) Our meta-analysis (Lynch, Laws & McKenna 2010), criticised by some for being overly selective (because we analysed only high-quality studies using an active control group!), produced the data set with the fewest imputed (i.e. missing) studies.

2) The Wykes et al (2008) analysis is curious. First, the authors state that the effect size was Cohen's d, when in fact Glass's delta (a quite different effect size) was used. This could, of course, be a simple error. The choice of outcome variable, however, is not an error: the authors chose to analyse only the low-quality studies from the Wykes paper. Why would a study of bias select only the low-quality studies and not the high-quality ones, or at least both?
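The difference between the two indices is not cosmetic: Cohen's d standardises the mean difference by the pooled SD of both groups, whereas Glass's delta uses the control group's SD alone, so whenever the groups have unequal variances the two diverge. A minimal illustration with made-up figures:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Mean difference standardised by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def glass_delta(m1, m2, s_control):
    """Mean difference standardised by the control group's SD only."""
    return (m1 - m2) / s_control

# Invented numbers: same mean difference, but the control SD is twice the
# treatment SD, so the two indices give noticeably different answers.
d = cohens_d(10, 2, 20, 8, 4, 20)   # ≈ 0.63
delta = glass_delta(10, 8, 4)       # = 0.50
print(f"Cohen's d = {d:.2f}, Glass's delta = {delta:.2f}")
```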

Table 2 shows where these (positive-symptom) effect sizes were derived from in the Wykes et al paper. What is clear is that the low-quality studies are not significantly heterogeneous, while the high-quality studies show significant heterogeneity; this is partly borne out by the far broader 95% confidence intervals in the high-quality studies (even though they number almost half as many as the low-quality studies).

Table 2 Effect sizes from Wykes et al (2008)

By selecting low-quality studies, it would seem that the probability of finding bias may be diminished; at the very least, the estimate of bias is unreliable.

3) From the Lincoln et al (2008) meta-analysis, the authors selected data for 9 studies comparing CBT vs TAU. Omitted, however, was an additional comparison of 10 studies of CBT vs active control (see Table 3 below).

Table 3. Data from Lincoln et al (2008) Meta-analysis

The notable thing, again, is that the authors chose to exclude the one analysis that has: a) a larger sample, b) a non-significant effect and c) far wider 95% confidence intervals, i.e. greater variance. These factors could obviously conspire against finding bias and leave us uncertain about bias in this meta-analysis.

4) From the Zimmerman et al meta-analysis, the authors included 5 analyses, and all bar the last had proportionally large numbers of imputed studies. The one comparison that produced no imputed studies was the comparison with an 'active' control (like Lynch et al, which also produced no imputed studies).

5) Finally, Niemeyer & Musch selected one comparison from the Jones et al (2010) Cochrane meta-analysis: a somewhat odd choice, as it measures the 'relative risk of leaving a study early'.

These decisions are made odder still, and less reliable, by the fact that Niemeyer & Musch failed to include any data from meta-analyses by the same Cochrane group examining symptoms (Jones et al 2004), or indeed other meta-analyses such as those by Rector and Beck (2002) or the UK NICE committee (2009).

Anyway, to conclude: the bias analysis of CBT for psychosis by Niemeyer et al is itself biased by unexplained choices made by the reviewers themselves. Not being psychic, I have no idea why they made these choices, or what difference it would make to include different measures and additional meta-analyses. One thing I do know, however, is that any claim that bias does not exist in studies examining CBT for psychosis rests on an analysis that is itself biased and unreliable! Researchers are familiar with the idea of GIGO (Garbage In, Garbage Out), which has often been levelled at meta-analysis; perhaps we now need to consider METAGIGO!