Friday, 6 June 2014

Meta-Matic: Meta-Analyses of CBT for Psychosis




Meta-analyses are not 'ready-to-eat' dishes that necessarily satisfy our desire for 'knowledge' - they require as much inspection as any primary data paper and, indeed, afford closer inspection... as we have access to all of the data. Since the turn of the year, 5 meta-analyses have examined Cognitive Behavioural Therapy (CBT) for schizophrenia and psychosis. The new year started with the publication of our own meta-analysis (Jauhar et al 2014), which has received some comment on the BJP website - something I wholly encourage; however, the 4 further meta-analyses published in the last 4 months have received little or no commentary... so I will briefly offer my own.

Slow Motion (Ultravox)


1)      Turner, van der Gaag, Karyotaki & Cuijpers (2014) Psychological Interventions for Psychosis: A Meta-Analysis of Comparative Outcome Studies

Turner et al assessed 48 Randomised Controlled Trials (RCTs) involving 6 psychological interventions for psychosis (e.g. befriending, supportive counselling, cognitive remediation) and found CBT to be significantly more efficacious than the other interventions (pooled together) in reducing positive symptoms and overall symptoms (g = 0.16 [95% CI 0.04 to 0.28] for both), but not negative symptoms (g = 0.04 [95% CI -0.09 to 0.16]).

The one small effect described by Turner et al as robust - that for positive symptoms - became nonsignificant, however, when researcher allegiance was taken into account. Turner et al rated each study for allegiance bias along several dimensions; essentially, CBT only reduced symptoms when the researchers had a clear allegiance in favour of CBT - and this bias was present in over 75% of the CBT studies.

Comments:
One included study (Barretto et al) did not meet Turner et al's own inclusion criterion of random assignment. Barretto et al state: "The main limitations of this study are ...this trial was not truly randomized" (p.867). Rather, patients were consecutively assigned to groups and differed on baseline background variables, such as age of onset, which was 5 years earlier in controls than in the CBT group (18 vs 23). Crucially, some effect sizes in the Barretto study were large (approximately 1.00 for PANSS total and for the BPRS). Being non-random, it should have been excluded, and with 95% confidence intervals hovering so close to zero, this makes a big difference - I shall return to the Barretto study below.
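To see why one borderline study can matter so much, here is a quick leave-one-out sensitivity check of the kind one might run. The numbers are purely hypothetical (they are not Turner et al's data); the point is only that, when the pooled CI already hovers near zero, dropping a single large-effect trial can tip the result from significant to nonsignificant.

import numpy as np

def pool(effects, ses):
    # fixed-effect inverse-variance pooling; returns estimate and 95% CI bounds
    w = 1 / np.asarray(ses) ** 2
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return est, est - 1.96 * se, est + 1.96 * se

# nine modest-effect studies plus one outlying non-randomised study with g of about 1.0
effects = [0.10] * 9 + [1.00]
ses = [0.20] * 10

print("all ten studies: g = %.2f (95%% CI %.2f to %.2f)" % pool(effects, ses))
print("outlier removed: g = %.2f (95%% CI %.2f to %.2f)" % pool(effects[:-1], ses[:-1]))
# all ten studies: g = 0.19 (95% CI 0.07 to 0.31) - 'significant'
# outlier removed: g = 0.10 (95% CI -0.03 to 0.23) - the CI now crosses zero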

Translucence (Harold Budd & John Foxx)


2)  Burns, Erickson & Brenner (2014) Cognitive Behavioural Therapy for medication-resistant psychosis: a meta analytic review

Burns et al examined CBT’s effectiveness in outpatients with medication-resistant psychosis, both at treatment completion and at follow-up. They located 16 published articles describing 12 RCTs. Significant effects of CBT were found at post-treatment for positive symptoms (Hedges’ g=.47 [95%CI 0.27 to 0.67]) and for general symptoms (Hedges’ g=.52 [95%CI 0.35 to 0.70]). These effects were maintained at follow-up for both positive and general symptoms (Hedges’ g=.41 [95%CI 0.20 to 0.61] and .40 [95%CI 0.20 to 0.60], respectively).

Comment
Wait a moment.... what effect size is being calculated here? Unlike all other CBT for psychosis meta-analyses, these authors attempt to assess pre-post change rather than the usual endpoint differences between groups. Crucially - though not stated in the paper - the change effect size was calculated by subtracting the baseline and endpoint symptom means and then dividing by... the pooled *endpoint* standard deviation (and not, as we might expect, the pooled 'change SD'). It is difficult to know what such a metric means, but the effect sizes reported by Burns et al clearly cannot be compared with those of other meta-analyses or with the usual benchmarks for small, medium and large effects (pace Cohen).
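For readers unfamiliar with the distinction, here is a minimal sketch with made-up numbers of why the denominator matters when standardising pre-post change. The pre-post correlation r is hypothetical; the point is simply that the change SD and the endpoint SD are different quantities, so effect sizes built on one cannot be benchmarked against the other.

import numpy as np

m_pre, m_post = 75.0, 65.0      # hypothetical PANSS-style means
sd_pre, sd_post = 18.0, 16.0    # hypothetical SDs
r = 0.7                         # hypothetical pre-post correlation

# the SD of the change scores depends on the pre-post correlation
sd_change = np.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)

d_change_sd = (m_pre - m_post) / sd_change   # conventional standardised mean change
d_endpoint_sd = (m_pre - m_post) / sd_post   # dividing by the endpoint SD instead

print("change-SD metric:   %.2f" % d_change_sd)    # ~0.75 with these numbers
print("endpoint-SD metric: %.2f" % d_endpoint_sd)  # ~0.63 - a different quantity altogether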

This meta analysis also included the non-random Barretto et al trial, which again is contrary to the inclusion criteria for this meta analysis; and crucially, Barretto produced - by far - the largest effect size for general psychotic symptoms in this unusual analysis (See forest plot below).

[Forest plot from Burns et al: general symptom effect sizes, with Barretto et al the largest]

 
3)  van der Gaag, Valmaggia & Smit (2014) The effects of individually tailored formulation-based cognitive behavioural therapy in auditory hallucinations and delusions: a meta-analysis

van der Gaag et al examined end-of-treatment effects of individually tailored, case-formulation-based CBT on delusions and auditory hallucinations, covering 18 studies with symptom-specific outcome measures. Statistically significant effect sizes were reported: 0.36 for delusions and 0.44 for hallucinations. When compared with active treatment, CBT for delusions lost statistical significance (0.33), though CBT for hallucinations remained significant (0.49). Restricting the analysis to blinded studies reduced the effect size for delusions by almost a third (to 0.24) but, unexpectedly, had no impact on the effect size for hallucinations (0.46).


Comment
van der Gaag et al state that they excluded studies that "...were not CBTp but other interventions (Chadwick et al., 2009; Shawyer et al., 2012; van der Gaag et al., 2012)". Shawyer et al is an interesting example, as Shawyer and colleagues themselves describe their intervention as CBT, stating "The purpose of this trial was to evaluate...CBT augmented with acceptance-based strategies". The study also met the criterion of being individual and formulation-based.

More importantly, a clear inconsistency emerges: Shawyer et al was counted as CBT in two other 2014 meta-analyses on which van der Gaag is an author. One is the Turner et al meta-analysis (described above), where it was even classified as having a CBT allegiance bias (see the far-right classification from Turner et al, below).

[Turner et al's allegiance classification, showing Shawyer et al rated as having a CBT allegiance]

And... Shawyer et al is further included in a third meta-analysis - of CBT for negative symptoms - by Velthorst et al (described below), on which van der Gaag and Smit are both co-authors.

So, the same authors considered a study to be CBT in two meta-analyses, but not in a third. Interestingly, the exclusion of Shawyer et al matters because that trial showed befriending significantly outperforming CBT in its impact on hallucinations. The effect sizes reported by Shawyer et al themselves at end of treatment for blind assessment (PSYRATS) give advantages of befriending over CBT of 0.37 and 0.52, and of 0.40 for distress relating to command hallucinations.



While the exclusion of Shawyer et al seems inexplicable, the inclusion of Leff et al (2013) as an example of CBT is highly questionable. Leff et al is the recent 'Avatar therapy' study, and nowhere does it even mention CBT. Indeed, in describing Avatar therapy, Leff himself states that he "jettisoned some strategies borrowed from Cognitive Behaviour Therapy, and developed some new ones".

And finally... the endpoint CBT advantage of 0.47 for hallucinations in the recent unmedicated-psychosis study by Morrison et al (2014) overlooks the fact that precisely this magnitude of CBT advantage existed at baseline, i.e. before the trial began... so it does not represent any improvement in the CBT group, but a pre-existing group difference in favour of CBT!
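One way to guard against exactly this problem is to standardise the difference in pre-to-post change between the groups, rather than the endpoint difference alone (a pre-post-control effect size in the spirit of Morris's d_ppc). The sketch below uses illustrative numbers, not the trial's raw data, to show how a 0.47 SD endpoint 'advantage' can correspond to a change-based effect of roughly zero when the same advantage already existed at baseline.

def d_ppc(m_pre_t, m_post_t, m_pre_c, m_post_c, sd_pre_pooled):
    # pre-post-control effect size: difference in change scores / pooled baseline SD
    return ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre_pooled

# illustrative hallucination scores (lower = better): the CBT group starts ~0.47 SD
# ahead of controls, and both groups then improve by exactly the same amount
print(d_ppc(m_pre_t=8.0, m_post_t=7.0, m_pre_c=9.4, m_post_c=8.4, sd_pre_pooled=3.0))
# 0.0 - no CBT-specific benefit, despite an endpoint difference of (8.4 - 7.0)/3.0, i.e. ~0.47 SD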

Removing the large effect size of 0.99 for Leff et al, including Shawyer et al with its negative effect size of over 0.5, and recognising that the patients receiving CBT in Morrison et al showed no change relative to controls would clearly alter the picture. It would be surprising if the effect then remained significant...

Hiroshima Mon Amour (Ultravox)


4)  Velthorst, Koeter, van der Gaag, Nieman, Fett, Smit, Staring, Meijer & de Haan (2014) Adapted cognitive–behavioural therapy required for targeting negative symptoms in schizophrenia: meta-analysis and meta-regression

Velthorst and colleagues located 35 publications covering 30 trials. Their results showed the effect of CBT on negative symptoms to be nonsignificant whether negative symptoms were a secondary outcome (Hedges' g = 0.093, 95% CI -0.028 to 0.214, p = 0.130) or the primary outcome (Hedges' g = 0.157, 95% CI -0.10 to 0.409, p = 0.225). Meta-regression revealed that stronger treatment effects were associated with earlier year of publication and lower study quality.

Comment
Aside from the lack of a significant effect, the main finding of this study is that the large effect sizes of the early studies have shrunk massively, a change that tracks the increasing quality (e.g. more blind assessment) of later studies.

Finally, as Velthorst et al note, there is the possibility of adverse effects of CBT - most clearly visible in the forest plot below, where 13 of the last 21 studies (62%) show a greater reduction in negative symptoms in the treatment-as-usual group!

[Forest plot of CBT effects on negative symptoms, from Velthorst et al]





Friday, 21 February 2014

The Farcial Arts: Tales of Science, Boxing, & Decay




"Do you think a zebra is a white animal with black stripes, or a black animal with white stripes?"
 

Why do researchers squabble so much? Sarah Knowles recently posed this interesting question on her blog, in a post entitled Find the Gap. The debate arose in the context of the new Lancet paper by Morrison et al looking at the efficacy of CBT in unmedicated psychosis. I would advise taking a look at the post; the comments also raise additional provocative ideas (some of which I disagree with) about how criticism in science should be conducted.

So, why do researchers squabble so much? First, I would replace the pejorative 'squabble' with the less loaded 'argue'. In my view, it is their job to argue... just as it is the job of politicians, husbands and wives, children, everyone - at least in a democracy! Our 'science of the mind', like politics, gives few definitive answers... and so we see a lot of argument and few knock-out blows.

'I am Catweazle' by Luke Haines
 What you see before you is a version of something that may be true...
I am Catweazle, who are you?

But we might ask: why do scientists - and psychologists in particular - rarely land knock-out blows? To carry the boxing analogy forward... one reason is that opponents 'cover up'; they naturally defend themselves and, to mix metaphors, some may even "park the bus in front of the goal".

And, like boxing, science may sometimes seem to be about bravado. Some make claims outside of the peer-reviewed ring with such boldness that they are rarely questioned, while others make ad hominem attacks from the sidelines... preferring to troll behind a mask of anonymity - more super-zero than super-hero.


A is for Angel Fish  

Some prefer shadow boxing - rather like clinicians carrying on their practice paying little heed to the squabbles, or possibly even to the science. For example, some clinicians claim that the evidence on whether CBT reduces the symptoms of psychosis is irrelevant since, in practice, they work on something they call distress (despite this being unevidenced). Such shadow boxing helps keep you fit, but it does not test your true strength - you can never know how strong your intervention is... until it is pitted against a worthy opponent, as opposed to your own shadow!

 

Despite this, clashes between science and practice do emerge. Many fondly remember Muhammad Ali and his great fights against the likes of Joe Frazier (less attractive, didn't float like a butterfly). Fewer recall Ali's post-retirement battles, including one against the professional wrestler Inoki - not a fair fight, not a nice spectacle, and not decisive about anything at all. The arguments between scientists and practitioners are like this - they have different paradigms, aims and languages, with probably modest overlap - often a no-contest.

Race for the Prize by Flaming Lips
 .

Is 'Normal Science' all about fixed bouts?
We should acknowledge that some bouts are 'fixed', with some judges biased toward one of the opponents. In science, this may happen in contests between an established intervention (e.g. CBT for psychosis, antipsychotic medication) and those arguing that the intervention is not so convincing. Judges are quite likely to be advocates of the traditional therapy, or at least of the status quo - this is part of Kuhnian 'normal science': most people have trained and work within the paradigm, ignoring the problems until they escalate, finally leading to replacement (a paradigm shift). These changes do not come from knock-out blows but from a war of attrition, with researchers hunkered down in the trenches, advancing and retreating by yards over years. What this means is that it is hard to defeat an established opponent - unseating an aging champion requires much greater effort than simply protecting that champion.


This is Hardcore - Pulp
I've seen the storyline played out so many times before.
Oh that goes in there.
Then that goes in there.
Then that goes in there.
Then that goes in there. & then it's over.

Monster-Barring: Protective Ad Hoc Championship Belts
Returning to CBT for psychosis, nobody should expect advocates to throw in the towel - that is not how science progresses. Rather, as the philosopher of science Imre Lakatos argued, we would expect them to start barricading against attacks with their 'protective belt' - adding new layers of ad hoc defence around the core ideas. Adjustments that serve only to maintain the 'hard core', however, mark the research programme out as degenerative.
 
 
Not a leg to stand on

Of course, the nature of a paradigm in crisis is that ad hoc defences emerge, including examples of what Lakatos calls 'monster barring'. To take an example, advocates of CBT for psychosis have seen it as applicable to all people with a schizophrenia diagnosis; when this is found wanting, the new position becomes: schizophrenia is heterogeneous and we need to determine for whom CBT works. Monster barring protects the hypothesis against counter-examples by making exceptions (not tested or evidenced, of course). This could go on indefinitely: CBT must be delivered by an expert with x years of training; CBT works when used in the clinic; CBT works for those individuals rated suitable for CBT... ad infinitum. What happens ultimately is that people lose faith, break ranks, become quiet deserters, and join new, ascending faiths - nobody wants to stay on a losing team.


Although sometimes, like Sylvester Stallone, scientific ideas make a comeback... spirits rise and everyone gets hopeful again; but secretly we all know that comebacks follow a law of diminishing returns, and that holding on for too long brings increased potential for... harm. A degenerative research programme may be harmful because it wastes time and resources, because it offers false hope, and because it diverts intelligent minds and funds away from the development of alternatives with greater potential.

"If even in science there is no a way of judging a theory but by assessing the number, faith and vocal energy of its supporters, then this must be even more so in the social sciences: truth lies in power." Imre Lakatos

Queensbury rules
All core beliefs have some acceptable protection - the equivalent of gum shields and a 'box', I suppose - but some want to enter the ring wearing a suit of armour. Here I will briefly mention Richard Bentall's idea of 'rotten cherry picking', which emerged in the comments on the Find the Gap blog. Professor Bentall argues that just as researchers can cherry-pick analyses (if they don't register those analyses), critics can 'rotten cherry-pick' their criticisms, focusing on things that, he declares, suit their negative agenda. In essence, he seems to suggest that we ought to define acceptable criticism on the basis of what the authors declare admissible! I have already commented on this idea in the Find the Gap post. Needless to say, in science as in boxing, you cannot be both a participant and the referee!


Spectator sport
Some love nothing more than the Twitter/blog spectacle of two individuals intellectually thumping each other. But for others, just as with boxing, science can seem unedifying (a point not lost on some service users). Not everybody likes boxing, and not everybody likes the way science operates; but both are competitive and, unlike Alice in Wonderland, not everyone is a 'winner' - though even the apparent losers often never disappear... such is the farcial arts.

Thursday, 6 February 2014

My Bloody Valentine: CBT for unmedicated psychosis

 
When I critiqued Morrison et al's exploratory CBT trial with people who stop taking antipsychotic medication, I promised to write a post on the final study.
 
Well, it appeared in the Lancet today, and a free copy is here. I am not going to describe the study in detail, as it is excellently covered in today's Mental Elf blog. Contrary to the fanfare of glowing comments from highly respected schizophrenia/psychosis researchers, I think the paper has so many issues that I may need to write a second post. Here, though, I will keep it simple and concentrate on the primary outcome data - symptom change scores on the PANSS.

'Soon' by My Bloody Valentine (Andy Weatherall mix)


The study examines schizophrenia patients who have decided not to take antipsychotic medication: 37 were randomly assigned to 9 months of CBT and 37 to what the authors call TAU (treatment as usual) - though this TAU is quite unusual, in an important way that will become clear below.

What do the primary outcome PANSS scores (total, positive and negative symptoms) reveal?

Table 1. PANSS scores during the intervention (up to 9 months) and follow ups to 18 months

 
The key questions are:
Do the CBT and TAU groups differ in PANSS scores at the end of the intervention (9 months), and do they differ at the end of the study (18 months)? One simple way to address both questions is to calculate the effect sizes at 9 months and at 18 months.

9 months
PANSS total = -0.37 (95% CI -0.96 to 0.22)
PANSS positive = -0.18 (95% CI -0.77 to 0.40)
PANSS negative = -0.45 (95% CI -1.04 to 0.14)
 
Examination of the effect sizes at the end of the intervention (9 months) reveals that the CBT and TAU groups do not differ significantly on any of the three primary outcome measures (i.e. all CIs cross zero).

18 months
PANSS total = -0.75 (95% CI -1.44 to -0.05)
PANSS positive = -0.61 (95% CI -1.27 to 0.05)
PANSS negative = -0.45 (95% CI -1.47 to -0.08)

At 18 months, the PANSS positive effect is nonsignificant, while the PANSS total and PANSS negative effect sizes are moderate; however, their confidence limits nearest zero (-0.05 and -0.08) suggest only marginal significance.
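For anyone wishing to check such figures, a between-group effect size and its approximate 95% CI can be computed directly from summary statistics. The sketch below is not the paper's code, and the common SD of 15 and the 37 patients per arm are assumptions for illustration, so its output will only approximate the effect sizes quoted above (which rest on the trial's actual SDs and analysed samples).

import numpy as np

def cohens_d_ci(m1, sd1, n1, m2, sd2, n2):
    # Cohen's d (group 1 minus group 2) with an approximate 95% CI
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

# e.g. the 9-month PANSS total means (CBT 57.95 vs TAU 63.26), with an assumed SD of 15
print("d = %.2f (95%% CI %.2f to %.2f)" % cohens_d_ci(57.95, 15, 37, 63.26, 15, 37))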

A closer inspection of the means shows that the significant differences at 18 months almost certainly reflect an increase in symptom scores for the TAU group rather than a decrease for the CBT group (compare CBT at 9 and 18 months with TAU at 9 and 18 months).


My final and crucial point concerns within-group symptom reduction.
Table 2 shows the baseline PANSS scores on the primary outcome measures, and it is informative to compare the change from baseline within each group (CBT and control).
 
Table 2. PANSS scores at baseline
 

If we compare baseline with the end of the intervention (9 months):

PANSS total
CBT group show a reduction from 70.24 to 57.95 =12.29 
TAU group show a reduction from 73.27 to 63.26 =10.01

PANSS positive
CBT group show a reduction from 20.30 to 16.0 =4.30
TAU group show a reduction from 21.65 to 17.0 = 4.65

PANSS negative
CBT group show a reduction from 13.54 to 12.50 = 1.04
TAU group show a reduction from 15.49 to 14.26 = 1.23

So, after 9 months of intensive CBT intervention, the controls - who did not even receive a placebo - show a greater reduction in positive and negative symptoms!

Moreover, the 'natural' reduction shown at 9 months by the TAU group is as large as the reduction shown by the CBT group at the very end of the trial (18 months: PANSS total = 13.77; PANSS positive = 5.67; PANSS negative = 1.01) - no significant difference exists between the TAU reduction at 9 months and the CBT reduction at 9 or 18 months.

What then have Morrison et al shown?
I would argue that their data show, for the first time, how patients who choose to remain unmedicated display fluctuations in symptomatology (as we might expect, given they are unmedicated)... but crucially, these fluctuations are as large as the changes seen in the CBT group. Hence, it is reasonable to ask: have Morrison et al simply documented the 'normal fluctuation' in the symptomatology of unmedicated patients - something that has nothing to do with CBT?

Wednesday, 29 January 2014

Blinded by Science




"The New Year starts with a test of an established tenet of treatment in schizophrenia."

It is not often that we hear such phrases, but thus opens the 'highlights' section of the latest edition of the British Journal of Psychiatry, referring to our new meta-analysis examining Cognitive Behaviour Therapy (CBT) for the symptoms of schizophrenia. This is the most comprehensive analysis ever undertaken, covering 50 Randomised Controlled Trials (RCTs) of this 'talk therapy' published over the past 20 years. The paper has received press coverage and is, of course, available to subscribers at the British Journal of Psychiatry, but I would like to give an overview for the interested lay reader, service users, or anyone who cannot access the journal.


Forbidden Colours (Sakamoto & Sylvian)
I’ll go walking in circles
While doubting the very ground beneath me
Trying to show unquestioning faith in everything

 

Looking at all trials regardless of quality, the paper reveals a small effect of CBT in reducing the symptoms of schizophrenia: effect sizes of 0.25 for positive and 0.13 for negative symptoms. To put these effect sizes into everyday language: the vast majority of patients in the CBT and control groups do not differ at the end of the intervention - the score distributions of the two groups overlap by 82% for positive symptoms and by 90% for negative symptoms.
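For the curious, overlap figures like these can be derived from an effect size via Cohen's U1, assuming two normal distributions with equal SDs; the sketch below reproduces the quoted values to within rounding, though the paper's own calculation may differ in detail.

from scipy.stats import norm

def percent_overlap(d):
    # Cohen's U1 gives the proportion of the two distributions that does NOT overlap;
    # return the complementary percentage that does
    u2 = norm.cdf(abs(d) / 2)
    u1 = (2 * u2 - 1) / u2
    return 100 * (1 - u1)

for label, d in [("positive symptoms", 0.25), ("negative symptoms", 0.13)]:
    print("%s: d = %.2f -> ~%.0f%% overlap" % (label, d, percent_overlap(d)))
# roughly 82% and 90% - the figures quoted above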

But this is not the end of the story...

Study Quality
Studies vary in their quality (e.g. studies with fewer methodological controls are more prone to bias). In this context, we drew attention to 'blinding' or 'masking', i.e. whether the person assessing symptoms at outcome knows whether patients did or did not receive CBT. We found that effect sizes were up to 7 times larger in nonblind than in blind studies, and when effect size is assessed in blind studies only, the small effects disappear altogether (see Table 1). In other words, when assessors know which patients received CBT, their ratings of patient benefit at outcome are massively inflated! In plain language, at the end of blind trials the CBT and control group distributions overlap by 94% for positive symptoms and by 97% for negative symptoms.


Table 1. Comparison of effect sizes for blind (low risk of bias) vs nonblind (high risk of bias) studies


Soft by Lemon Jelly (with added "If you leave me now" by Chicago)

 
Whats happening in individual studies: Forest Plots
Forest plots show the effect size from each trial (the filled rectangle). The size of the rectangle represents the size of the sample tested in that study. The horizontal lines represent the 95% confidence intervals for each effect; these essentially tell us how precise the estimated effect is - shorter lines indicate a more precise estimate, longer lines a less precise one. You will notice that the wider CIs occur in studies with smaller samples, and vice versa. The key question to ask is: do the 95% CIs for a given study cross zero? If they do, then that trial found a nonsignificant effect of CBT on symptoms.

Looking at Figure 1, we can see that 25 of the 33 studies document a nonsignificant impact of CBT on positive symptoms. Nonetheless, the overall effect across all 33 studies is significant: ES = -0.25 (95% CI -0.37 to -0.13). This illustrates that even when around 75% of studies are individually nonsignificant, a meta-analysis can still produce an overall significant effect.
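This is not a paradox: pooling many imprecise estimates yields one much more precise estimate. The toy example below uses hypothetical numbers (chosen so that the pooled figure mirrors the one above, not taken from the actual trials) to show fixed-effect inverse-variance pooling returning a significant overall effect even though every contributing study is nonsignificant on its own.

import numpy as np

# ten hypothetical studies, each with an effect of -0.25 and a standard error of 0.20,
# so every individual 95% CI (roughly -0.64 to 0.14) crosses zero
effects = np.full(10, -0.25)
ses = np.full(10, 0.20)

weights = 1 / ses**2                                   # fixed-effect inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print("pooled ES = %.2f (95%% CI %.2f to %.2f)"
      % (pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se))
# pooled ES = -0.25 (95% CI -0.37 to -0.13): significant overall, from ten nonsignificant studies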

Figure 1. Forest plot of 33 studies examining the impact of CBT on positive symptoms


The picture for hallucinations is bleaker...with only 4 significant studies ever published

Figure 2 CBT for Hallucinations

And if you thought it could not get worse... it is for negative symptoms, with no significant study since 2003.



A few key take-home observations from the forest plots:
Positive symptoms - 25 of 33 (76%) studies are nonsignificant
Negative symptoms - 30 of 34 (88%) studies are nonsignificant
Hallucinations - 11 of 15 (73%) studies are nonsignificant

If anyone is interested in exploring the data and forest plots further, they may do so via a downloadable and interactive database on our website: http://www.cbtinschizophrenia.com/

You Cut Her Hair by Tom McRae

Symptoms or Distress?
One response to our paper, from some UK clinical psychologists, has been to say that they use CBT not to reduce the symptoms of psychosis, but to reduce the 'distress'. In the context of the clinical guidance provided to UK clinicians by the National Institute for Health and Care Excellence (NICE), this response raises interesting questions about the relationship between science and practice.



NICE do state that CBT should be used to reduce distress (see above); however, this is intriguing on multiple levels. First, NICE base their recommendations on the meta-analysis conducted for them by the National Collaborating Centre for Mental Health (NCCMH), in which all of the data examined come from RCTs aimed at symptom reduction... and not at distress.

This is perhaps exemplified by the following paragraph from the NICE guide


The NICE guide states that distress is the target, yet the outcome measured in the trials is not distress. Second, some UK clinicians are clearly taking the NICE guidance at face value in saying that they use CBT to 'reduce distress' - this is effectively unevidenced, or off-label, use of CBT. Third, and crucially, the evidence does not suggest that CBT reduces distress: NICE cite Trower et al (2004) as an example, yet that study actually shows no benefit of CBT for distress after one year.

Fourth, I would question the reference to CBT improving 'function' - the 2008 meta-analysis by Wykes et al showed that CBT has no significant impact on functioning in studies meeting their own minimally acceptable level of study quality. Fifth, they reference Garety et al regarding relapse prevention - our re-analysis of that study actually shows an increase in relapse for the CBT group. And finally, by the time of this NICE document in 2009, NICE had removed insight in psychosis as a target for CBT (following their 2002 recommendations), even though they had no evidence for it in the first place.

Hærra by Ásgeir Trausti


These findings create a challenge for the guidance provided by government organisations (in the UK, this is NICE), which advocates that "CBT be offered to all people with schizophrenia".

CBT does not reduce positive symptoms, negative symptoms, or hallucinations; it does not prevent relapse; it does not reduce distress; it does not improve functioning; and it does not improve insight. In the paper we therefore call on NICE to re-examine their recommendation - especially as new guidance is due in 2014... in a matter of weeks!