Although we failed to show a statistically significant effect of the intervention, we cannot rule out a beneficial effect of the cognitive therapy on transition rate (although it could be argued that the sample size required to show such an effect, and the small effect sizes reported here, would make such an endeavour unfeasible in practical terms and unwarranted in clinical terms).
We regularly hear scientists complain about the truth gap that exists between their research findings and how science journalists portray those findings... but what about the void that sometimes opens up between the findings of scientists and their own conclusions?
A case in point is a paper by Morrison et al (2012), recently published 'open access' in the British Medical Journal. The paper examines the use of Cognitive Therapy in individuals at high risk of developing psychosis. 'High risk' is a particularly contentious and hot topic in the context of the DSM-V proposals for early intervention, and CBT for psychosis is similarly contentious.
This paper strangely brought to my mind the 80s group Dead or Alive, their one-hit wonder 'You Spin Me Round' and the physical transformation of their singer Pete Burns... perhaps you will see why.
Figure 1. Still spinning - Pete Burns from the 1980s group Dead or Alive - before and after a CBT makeover
What did Morrison and the 13 co-authors do?
They conducted a multisite, single-blind, randomised controlled trial of Cognitive Therapy in 288 individuals (aged between 14 and 35) thought to be at high risk of psychosis. Participants were randomly assigned either to Cognitive Therapy sessions over six months plus monitoring of mental state (i.e., provision of warm, empathic, and non-judgmental face-to-face contact and supportive listening) or to monitoring of mental state only.
Results
- Overall, the prevalence of transition to psychosis was lower than expected (23/288; 8%), but crucially, no significant difference in transition rates emerged between the CT and control groups
- Distress from psychotic symptoms did not differ
- Symptom severity was, however, significantly reduced in the CT group (P=0.018)
A few thoughts about the study
1) Transition rates: Cognitive Therapy has no impact on transition rates
The low rate omits 29 individuals who were excluded at a second baseline assessment because they were deemed to have developed psychosis. The authors say:
This was a cautious manoeuvre to eliminate the possibility that such individuals were under-reporting at baseline and were, therefore, appropriately excluded on the basis of psychosis at first baseline assessment. … The inclusion of these 29 would have given an overall transition rate of 18%, which is similar to that found in the recent cohort studies
2) Reducing symptoms: The authors' claim that CT reduced symptom ratings is itself also questionable. The interaction between group and time did not reach significance, so it could reasonably be argued that CT had no greater impact than the control condition across time.
Moreover, as the authors themselves note, any effect "...would become of borderline significance if we were to adjust for multiple testing using a Bonferroni correction (there being three primary outcomes)." This is interesting because the primary outcome of the EDIE study protocol is transition and nothing is mentioned about symptom reduction as a primary outcome.
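As a back-of-the-envelope check of that caveat, here is a minimal sketch in Python, assuming the conventional 0.05 alpha, the three primary outcomes the authors mention, and the P=0.018 symptom-severity result reported above:

```python
# Rough Bonferroni check (illustrative only): with three primary outcomes,
# the per-test threshold falls from 0.05 to 0.05/3, and the reported
# P = 0.018 for symptom severity no longer clears it.

alpha = 0.05              # conventional significance level
n_primary_outcomes = 3    # per the authors' own caveat
p_symptom_severity = 0.018

threshold = alpha / n_primary_outcomes
verdict = "significant" if p_symptom_severity < threshold else "not significant"
print(f"Bonferroni-corrected threshold: {threshold:.4f}")
print(f"P = {p_symptom_severity} is {verdict} after correction")
```

Which is exactly why the authors describe the effect as being of 'borderline significance' once corrected.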
Who is spinning - Media, journal or authors?
Despite the authors being unable to make any claims about CT positively affecting transition rates (a rate which, as we have seen, would be more than doubled had the excluded 29 been counted), the lack of difference between CT and monitoring alone, the unreliable change in symptom scores, and the lack of any medication analysis (in fact, all participants were unmedicated as an entry requirement), they conclude the following:
"On the basis of low transition rates, high responsiveness to simple interventions such as monitoring, a specific effect of cognitive therapy on the severity of psychotic symptoms, and the toxicity associated with antipsychotic drugs, we would suggest that antipsychotics are not delivered as a first line treatment to people meeting the criteria for being in an at risk mental state"
So the article in the UK Guardian entitled 'Drugs not best option for people at risk of psychosis, study warns' is not simply a misunderstanding by a journalist, but what looks like author spinning. Indeed, as far as I am aware, no direct comparison between CBT and medication has ever been published!
Focusing on a different angle, the BMJ press release itself is headlined 'Cognitive therapy helps reduce severity of distress among psychotic patients', even though the paper (and the press release itself!) clearly states otherwise. The confusion arises because the authors also propose that "Cognitive therapy did not significantly affect distress related to these psychotic experiences...nor levels of depression, social anxiety, or satisfaction with life..."
How the authors can conclude anything about the efficacy of either intervention (CT or monitoring) is beyond me when they have no true control group! Who knows what would have happened with no intervention at all? ...it is clear that both groups improved significantly in terms of psychotic experiences, distress, anxiety, and depression...
So, this paper gave me a 'mind-pop' of the 80s band Dead or Alive, their hit 'You Spin Me Round' and the wonderfully eccentric, self-transforming Pete Burns. In fact, this tune has been reinvented many times by Pete Burns and others (e.g. Marilyn Manson) and, like Cognitive Therapy, it will no doubt continue to reinvent itself.
...perhaps some papers remind you of pop tunes or musicians, although I wouldn't be surprised to find myself in a minority of one!
References
Morrison AP, Stewart SL, French P, Bentall RP, Birchwood M, Byrne R, Davies LM, Fowler D, Gumley AI, Jones PB, Lewis SW, Murray GK, Patterson P, Dunn G (2012). Early detection and intervention evaluation for people at risk of psychosis: multisite randomised controlled trial. British Medical Journal, 344.
Apart from this latest study showing CBT is NBG for psychosis, the Cochrane Collaboration have just published their updated systematic review of CBT vs other psychological interventions. Their stark conclusion is: 'Trial-based evidence suggests no clear and convincing advantage for cognitive behavioural therapy over other - and sometime much less sophisticated - therapies for people with schizophrenia.'
Is this the death knell for CBT for psychosis? How can it survive such a verdict?
Dear Keith
There is a trial comparing CBT to antipsychotics (risperidone): http://www.ncbi.nlm.nih.gov/pubmed/21034687
No difference at 6 months, and the 12-month outcomes (published in abstract form) again show no difference, with 10% making the transition in both arms. Those receiving supportive therapy plus placebo had a rate of around 20%. Not significant, but this may be a power issue.
Thanks
Paul Hutton
Dear Peter
I would hold back on reading too much into the Cochrane review. There are problems with it. Remember, Cochrane recently said risperidone wasn't very effective. For a critical review of this and other drug reviews, see here: http://www.ncbi.nlm.nih.gov/pubmed/22486554
The main issue with their CBT review (and, with respect, yours and Keith Laws') is the failure to consider optimal dose. If I were reviewing quetiapine, I wouldn't pool data from a 75mg arm with a 400mg arm. Likewise, pooling data from a study of CBT lasting a couple of months (e.g., SOCRATES) with one lasting over six months is not helpful either.
Whether you are an advocate of CBT or not, the truth is we need much better studies. There has been an understandable desire to demonstrate effectiveness prior to efficacy, meaning small studies with heterogeneous patients have been carried out. You wouldn't catch a pharma company doing this. But pharma have lots of money, while we rely on very limited NHS funding - if we are lucky. This is probably why the better designed trials have been so short in duration (e.g., SOCRATES). It also influences inclusion and exclusion criteria. For instance, CBT might work better than non-specific treatments for the relatively small group of people 'ready to change', as it were. In my experience such a group, while small in numbers, is much easier to help - i.e., they turn up. However, with time-limited funding, you simply cannot have such tight inclusion criteria. You have to take everyone, meaning you end up with a bunch of people in both arms of your study not getting better - thus limiting power.
So we need bigger studies with substantial financial support. We also need studies adequately powered to test a priori hypotheses about who is likely to benefit from CBT. Clearly, it does not work for everyone.
Thanks
Paul Hutton
Another problem with the CBT trials is that they often fail to report response or remission rates.
I don't think that it's a real problem that CBT studies don't report response/remission rates. It's proven that the biggest problem is that CBT researchers don't/can't publish their null results. That's the BIG problem, especially with the CBT studies: the positive publication bias. Studies in psychology/psychotherapy are the champions at that. Studies like 'Adverse Effects of Cognitive Behavioral Therapy and Cognitive Remediation in Schizophrenia' (doi: 10.1097/NMD.0b013e31825bfa1d) are really rare, but they give us very important information that, most of the time, is systematically absent.
And another problem is the fact that CBT studies, especially in GB, are funded (maybe not enough) by governmental organisations, while research on other bona fide therapies is not sponsored by the government to anything like the same extent; they have to fund themselves. The result is that studies of other (non-CBT) therapies have far less opportunity to be conducted (they do have good studies, and they publish them, but there are fewer of them). And there are enough meta-analyses showing that other bona fide therapies have the same results as CBT - sometimes a bit better, sometimes a bit worse. But the conclusion of CBT researchers becomes: CBT is more effective than other forms of psychotherapy. While the only correct conclusion can be: CBT has more studies than other bona fide therapies; CBT works, but there is little or no evidence that it works better than other bona fide therapies.
Another problem with the CBT studies is that when they compare CBT with another form of therapy, the researchers too often choose a caricature of the 'other' therapy, for example by having a CBT therapist deliver the 'other' therapy. Of course, the validity of such a study can then be questioned.
Another problem with therapy research is, of course, researcher allegiance. If studies are corrected for that, there is almost no difference left between the effectiveness found in many CBT studies and that of other bona fide therapies.
Also a problem: RCTs are not designed for psychotherapy research; they are designed for research in medicine, where it is at least possible to work (double-)blind. If working like that isn't possible, the research is seriously flawed.
And I could go on: the replication of many psychology studies is not easy (or simply: other researchers frequently do not succeed in replicating psychological studies). Or: after all these years of research, neuroscience has not produced any (hard) objective measures - none of the diagnoses in the DSM can be 'proven' by an objective parameter. Even the methods neuroscience uses are limited by the compromises needed to make the machines work (e.g., MRI, arbitrary cut-offs)...
So I'm afraid we have to conclude that much past research in psychology/psychotherapy/psychiatry is really questionable. If that doesn't change fast, we will not be taken seriously anymore - for a reason I can understand very well. And the gap between professionals and researchers will widen even more, because the research is simply not reliable. Doing our best is necessary, but it's not enough!
"It' proven that the biggest problem is that CBT researchers don't/ can't publish their null results. That's the BIG problem, especially with the CBT studies: the positive publication bias."
If you mean CBT for psychosis studies, then this has just been assessed - and found to not be true:
http://www.ncbi.nlm.nih.gov/pubmed/22484024
Paul Hutton
Hi Paul (and Anonymous)
Re bias in publication, as Niemeyer et al (http://www.ncbi.nlm.nih.gov/pubmed/22484024) state, they found "moderate evidence for the presence of publication bias"
So, some bias does exist (as it does in most literatures, to a greater or lesser extent) - see my latest post: http://keithsneuroblog.blogspot.co.uk/2012/07/negativland-what-to-do-about-negative.html
I don't think publication bias is the major problem in this area - far more pernicious is the lack of blinding at outcome assessment in many CBT for psychosis studies.
Hi Paul,
No, I didn't (especially) mean CBT for psychosis studies. I had other diseases in mind (like depression: a study by Pim Cuijpers). I realise that this is a bit confusing, because this topic is about psychosis.
And, by the way, in the abstract of the study you refer to, it's not only about CBT - it's about 'psychotherapeutic interventions for schizophrenia'?
Dear Keith and Anon
The article examined several different psychosocial approaches for schizophrenia, one of which was CBT. The overall conclusion in the article was:
"In summary, there was thus very little evidence for a selective
publication of positive results. The validity of the data sets that were tested and found to be not significantly affected by publication bias is enhanced. We can conclude that the minor tendency toward selective publication of positive results hardly changes the assessment of the efficacy for such interventions as CBT, family therapy, and psychoeducation."
So, there might be some missing studies, but this hasn't led to a particular bias, given imputing them hardly changes the overall conclusions.
With respect to blinding, I agree this is an important issue potentially leading to overestimation of effect. But there are many factors potentially leading to underestimation of effect as well. Antipsychotic studies sacrifice generalisability for the sake of signal detection (e.g., excluding people who haven't responded to antipsychotics in past, and excluding placebo-responders), while CBT studies often seem to do the opposite.
But we do need to have a debate on efficacy of CBT for psychosis. If it works at all, then we need to know who for. This will encourage the community to develop better treatments for those remaining. We also need to know more about optimal dose and adverse effects. But who will fund these studies? AstraZeneca?
Paul Hutton
Hi Paul
It has been well-documented by us (and others) that blinding has an extremely large effect on CBT for psychosis outcomes. We (Lynch, Laws & McKenna 2010, https://uhra.herts.ac.uk/dspace/bitstream/2299/5741/3/903449.pdf) found an effect size of zero in blind studies and 0.6 in non-blind studies - this is striking and undermining for CBT in psychosis.
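(For readers unfamiliar with how such a subgroup comparison is carried out, here is a minimal, purely illustrative sketch of fixed-effect, inverse-variance pooling split by blinding status. The study-level effect sizes and standard errors below are invented for illustration only; they are not the Lynch et al data.)

```python
# Illustrative fixed-effect (inverse-variance) pooling of study effect sizes,
# split by whether outcome assessment was blind. All numbers are made up
# purely to show the mechanics of a blinding moderator analysis.

def pool_fixed_effect(studies):
    """studies: list of (effect_size, standard_error) tuples."""
    weights = [1 / se ** 2 for _, se in studies]                      # inverse-variance weights
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical study-level effect sizes (Cohen's d) and standard errors
blind_studies = [(0.05, 0.20), (-0.10, 0.25), (0.02, 0.18)]
nonblind_studies = [(0.55, 0.30), (0.70, 0.35), (0.50, 0.28)]

for label, studies in [("blind", blind_studies), ("non-blind", nonblind_studies)]:
    d, se = pool_fixed_effect(studies)
    print(f"{label:>10}: pooled d = {d:.2f} (SE {se:.2f})")
```

With invented inputs like these, the blind subgroup pools to roughly zero and the non-blind subgroup to around 0.6, which is the pattern being described.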
By contrast, you intriguingly argue "But there are many factors potentially leading to underestimation of effect as well." What 'many factors' were you contemplating and crucially, which (if any?) have been empirically shown to underestimate the effect size in CBT for psychosis?
Dear Keith,
But 0.6 is moderate-large, not extremely large, surely? It is also based on 2 studies in one analysis.
There are other features of those studies worth considering. First, they are the 2 smallest studies, and small studies generally have larger effects. Second, they are the 2 oldest studies, and effect sizes for treatments in schizophrenia have reduced over time. No-one really knows why. The drug people are particularly anxious about this, I’m told.
The other problem here though is that the analysis you did is quite difficult to interpret. You pooled data from various outcomes and concluded that studies previously thought to be promising were actually ineffective (e.g., Durham). I haven’t seen that done before (or since) for schizophrenia treatments!
Anyway, I agree a large part of the increased ES probably is related to lack of blinding, not least because Wykes et al reported the same effect in their previous review, but I added the word 'potentially' to my previous comment because I try not to jump to conclusions based on retrospective analyses of a limited number of studies. Many see meta-analysis as hypothesis-generating rather than definitive, particularly given the multiple confounding variables and the fact that researchers rarely publish their intentions prior to seeing the data. Obviously an issue if the researcher is gunning for a particular outcome.
Are there other factors which might *potentially* underestimate effects? The Wykes review identified 'clinical emphasis' as one - the less behavioural, the less effective. Lack of a dedicated trial therapist also emerged as a moderator (http://centaur.reading.ac.uk/24693/2/CBT_for_schizophrenia_and_therapist_effects.pdf). That trials often lack dedicated therapists might reflect the relatively low rates of funding they receive.
Other issues might be dose of CBT (number of sessions, or weeks available to deliver sessions). It would have been interesting if you or the Cochrane authors had examined excluding short duration studies, in addition to excluding pilot studies, small studies, studies with >20% schizoaffective disorder, hospitalisation rates and so on.
Another major difference between the CBT studies and the drug studies is that the latter always analyse mean change scores, not endpoint scores as do the CBT trials. There are a couple of studies (I think it’s the Durham and Sensky trials) where endpoint scores appear equivocal or favour control, but mainly because the CBT group had higher scores at baseline for some reason. If you look at mean change, the effect sizes change direction. I’m guessing you used the endpoint data in your analysis, whereas Leucht and colleagues in their analysis of antipsychotic drugs vs placebo prioritised mean change (endpoint scores were at the bottom of their data extraction hierarchy).
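(To make the endpoint-versus-change point concrete, here is a toy example with invented numbers - not the Durham or Sensky data - showing how the two comparisons can point in opposite directions when the groups differ at baseline.)

```python
# Toy illustration (invented numbers): when the CBT arm starts with higher
# symptom scores, endpoint scores can favour the control arm even though
# the CBT arm improved more in absolute terms (lower scores = better).

cbt_baseline, cbt_endpoint = 62.0, 47.0      # hypothetical mean symptom scores
ctl_baseline, ctl_endpoint = 54.0, 44.0

endpoint_difference = cbt_endpoint - ctl_endpoint   # +3.0: endpoint comparison favours control
change_cbt = cbt_endpoint - cbt_baseline            # -15.0
change_ctl = ctl_endpoint - ctl_baseline            # -10.0
change_difference = change_cbt - change_ctl         # -5.0: mean-change comparison favours CBT

print(f"Endpoint difference (CBT - control): {endpoint_difference:+.1f}")
print(f"Mean change difference (CBT - control): {change_difference:+.1f}")
```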
I’ve discussed the heterogeneity-of-participants issue already. This is just a theory - it is difficult to examine empirically with meta-analysis, unfortunately.
Paul
Dear bloggers
Some posts ago, Paul Hutton argued that some of the weakness of the findings of studies of CBT for psychosis might be due to constraints on the size of the studies that can realistically be carried out. For example, he states: 'Whether you are an advocate of CBT or not, the truth is we need much better studies'... 'small studies with heterogeneous patients have been carried out'... 'we rely on very limited NHS funding'... 'So we need bigger studies with substantial financial support.'
In fact, there have been several large trials of CBT which have received major funding from UK grant giving bodies. These include:
1. A multicentre, randomised controlled trial of cognitive therapy to reduce harmful compliance with command hallucinations (PI M Birchwood), 180 participants (target number), funded by the MRC (£1.1M).
2. Early Detection and Intervention Evaluation for individuals at high risk of psychosis (EDIE 2) (PI A Morrison), 288 participants, funded by the MRC (£1.1M) with additional funding from the Department of Health.
3. An evaluation of motivational (MI) plus cognitive therapy (CBT) for schizophrenia and substance misuse (PI C Barrowclough), 327 participants, funding from the MRC (£1.9M).
4. Study Of Cognitive Realignment Therapy in Early Schizophrenia (SOCRATES) (PI S Lewis et al), 315 participants, funding from MRC (amount unknown) plus additional funding from the NHS.
5. Psychological Prevention of Relapse in Psychosis (PRP)(PI P Garety), 301 participants, funding from the Wellcome Trust (amount uncertain, seems to have been part of a grant totalling £1.6M).
6. A Randomized Controlled Trial of Cognitive-Behavioral Therapy for Persistent Symptoms in Schizophrenia Resistant to medication (PI Sensky/Turkington), 90 participants, funding from the Wellcome Trust (amount unknown).
This list does not include a trial by Turkington et al (2002) on 422 participants which received funding from Pfizer. Nor does it include three large European trials (Klingberg, TONES study, 198 participants; Klingberg, POSITIVE study, target 330 participants; van der Gaag, 216 participants). There have also been non-health-service-funded American and Australian trials with numbers in the range of 50-100 participants (e.g. Grant et al (2011), 60 participants; Jackson et al (2005), 91 participants).
The reality is that something north of £10 million of taxpayers' money has already been spent on investigating CBT in schizophrenia in Europe. More applications for large, well-controlled studies are on the way, unchecked by the flow of negative findings.
Peter McKenna
Dear Peter McKenna
Thanks for your comment. However, you have attempted to refute a point I was not making.
I was clearly arguing that we need larger trials *with* less heterogeneous patients *when* comparing CBT to control treatments - i.e., the studies assessed in the Cochrane review we were talking about.
As you and Keith Laws will know, a small trial with heterogeneous patients will struggle to detect small but meaningful effect-size advantages for CBT - and this when comparing CBT to treatments that we know service users get some benefit from. In your review, 6 out of the 9 included studies had fewer than 35 patients in each arm, meaning their power to detect an ES of around 0.3 is around 20%. What sample size would give adequate power to detect such an effect?
[Before someone says “so what – who cares about 0.3?”, consider that drug treatment has an ES of around 0.5 over placebo in efficacy trials, and a within-group ES of around 0.3-0.5 in effectiveness trials]
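(As a rough, purely illustrative sketch of that power arithmetic - not a reanalysis of any particular trial - the following snippet, assuming a two-sided alpha of 0.05 and the statsmodels package, shows the power available with about 35 per arm and the sample size needed for 80% power at an effect size of 0.3.)

```python
# Rough power arithmetic for a two-arm trial detecting a standardised
# effect size of d = 0.3 (two-sided alpha = 0.05).
# Requires the statsmodels package (an assumption, not part of the original post).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power with about 35 participants per arm (as in many of the reviewed studies)
power_small = analysis.power(effect_size=0.3, nobs1=35, alpha=0.05)
print(f"Power with 35 per arm: {power_small:.2f}")        # roughly 0.2-0.25

# Sample size per arm needed for 80% power at d = 0.3
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Needed per arm for 80% power: {n_per_arm:.0f}")   # roughly 175 per arm
```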
You present a list of studies - presumably to refute the claim that adequate trials have not been carried out. But this does not address my point. In fact, only 2 of the 12 you list are (a) published, (b) N>70 and (c) compare CBT for established psychosis to placebo treatments rather than TAU or another sophisticated treatment (e.g., cog remediation, family therapy): Socrates (5 weeks of therapy) and Sensky et al (36 weeks of therapy).
The results from Sensky, but not Socrates, are favourable to CBT. Results are even more favourable if you look at those having >50% reduction in CPRS scores, something you did not include in your meta-analysis.
You then outline the amount of research funding spent on a range of CBT trials, but you fail to set this in context by considering the amount of money that goes on researching other treatments.
Finally, you bemoan the fact that £10 million+ has been spent on trying to develop effective yet benign treatments for severe mental illness, forgetting that access to such treatment is precisely what service users say they want:
http://www.cqc.org.uk/node/388516
Paul Hutton
Thanks to Paul Hutton's detailed analysis, I think we are now in a position to articulate some basic principles of bistromathics as applied to CBT for schizophrenia:
1. It is not possible for a negative finding and a conclusion that CBT is ineffective to occupy the same research article. This also applies to reviews, editorials and commentaries by the authors of these articles.
2. Research studies and meta-analyses show entanglement. Specifically, as more individual studies have negative findings, meta-analyses come to increasingly robust conclusions that CBT is effective. Conversely, if a meta-analysis (such as the 2012 Cochrane review) comes to negative conclusions, this means that a large, controlled study with positive findings will come into existence in the near future.
3. As negative findings accumulate, it becomes harder and harder to prove that CBT does not work. A point where CBT would be considered definitely ineffective can never be reached, because the research funding required would become infinite.
There may be still more principles out there...
Peter McKenna
Dear Peter
Why don't you engage with the actual arguments?
Much depends on whether you are interested in finding out whether CBT does work, or whether you are interested in simply proving it doesn't.
All the best
Paul Hutton