Friday, 6 November 2015

Song for the Siren



 
There is another future waiting there for you
I saw it different, I must admit
I caught a glimpse, I'm going after it
They say people never change, but that's bullshit, they do
Yes I'm changing, can't stop it now
And even if I wanted I wouldn't know how
Another version of myself I think I found, at last
Yes I'm Changing by Tame Impala
 


Some 'follow-up' observations on my earlier 'Thoughts about Holes' post about the PACE follow-up study of Chronic Fatigue Syndrome/ME by Sharpe et al 2015.

To recap, after completing their final outcome assessment, some trial participants were offered an additional PACE therapy... "If they were still unwell, they wanted more treatment, and their PACE trial doctor agreed this was appropriate. The choice of treatment offered (APT, CBT, or GET) was made by the patient's doctor, taking into account both the patient's preference and their own opinion of which would be most beneficial." (White et al 2011)
I have already commented on some of the issues around how these decisions were made, but here I focus on the Supplementary Material for the paper (see particularly Table C at the bottom of this post) and on what I believe to be some unsupported, or unsupportable, inferences recently made about the PACE findings.

Song to the Siren by Tim Buckley
(from the Monkees TV show)
Did I dream you dreamed about me?
Were you here when I was flotsam?
Now my foolish boat is leaning
Broken lovelorn on your rocks

I will start with three recent quotes making key claims about the success of the PACE follow-up findings and discuss the evidence for each claim.

1) First, in a Mental Elf blog this week, (Sir) Professor Simon Wessely rightly details and praises the benefits of randomised controlled trials (RCTs), concluding that PACE matches up quite well. But to extend Prof Wessely's nautical motif, I'm more interested in how 'HMS PACE' was seduced by the song of the Sirens, forced to abandon methodological rigour on the shallow, rocky shores of bias and confounding.

Prof Wessely states
"There was no deterioration from the one year gains in patients originally allocated to CBT and GET. Meanwhile those originally allocated to SMC and APT improved so that their outcomes were now similar. What isn’t clear is why. It may be because many had CBT and GET after the trial, but it may not. Whatever the explanation for the convergence , it does seem that CBT and GET accelerate improvement, as the accompanying commentary pointed out (Moylan, 2015)."
It seems to me that the specific claim of "no deterioration from the one year gains in patients originally allocated to CBT and GET" might be balanced by the equally valid statement that we also saw "no deterioration from the one year gains in patients originally allocated to SMC and APT". Almost one-third of the CBT and GET groups did, however, receive additional treatments, as did the SMC and APT groups. As I mentioned in my previous post, the mean group scores at follow-up are now a smorgasbord of PACE interventions, meaning that the group means lack... meaning!

At best, the PACE follow-up data might show that deterioration did not occur in the GET and CBT groups to any greater or lesser extent than it did in the SMC and APT groups. The original groupings effectively no longer exist at follow-up, and we should certainly not flit between explanations based sometimes on the initial randomised groupings and sometimes on the additional non-randomised therapies.

In terms of deterioration, we know only one thing: one group was close to showing a significantly greater number of patients reporting 'negative change' during follow-up and, contrary to the claim, this was the CBT group (see Table D in the supplementary materials).
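For readers who want to see the mechanics behind a phrase like "close to significantly greater", here is a minimal Python sketch of how a difference between two groups in the proportion of patients reporting 'negative change' would be tested. The counts below are hypothetical placeholders, not the actual Table D figures (which are not reproduced in this post); only the method is the point.

from scipy.stats import fisher_exact

# Hypothetical counts, NOT the actual PACE Table D figures:
# [patients reporting negative change, patients not reporting negative change]
cbt_group = [15, 128]   # hypothetical CBT group of 143
smc_group = [7, 134]    # hypothetical SMC group of 141

# fisher_exact returns the odds ratio and a two-sided p value for the 2x2 table
odds_ratio, p_value = fisher_exact([cbt_group, smc_group])
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")

A two-sided p value hovering just above 0.05 on a table like this is the sort of pattern I am describing as 'close to significant'.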


2) Second, in the abstract of the PACE follow-up paper, Sharpe et al draw conclusions about the long-term benefits of the original therapy groups:
"The beneficial effects of CBT and GET seen at 1 year were maintained at long-term follow-up a median of 2·5 years after randomisation. Outcomes with SMC alone or APT improved from the 1 year outcome and were similar to CBT and GET at long-term follow-up, but these data should be interpreted in the context of additional therapies having being given according to physician choice and patient preference after the 1 year trial final assessment." (my italics)

Again, for the reasons just outlined, we can no more infer that CBT and GET maintained any benefits at follow-up than we can argue that APT, or even the control condition (SMC), maintained theirs. The smorgasbord data dish prevents any meaningful inference.

3) Finally, in a Commentary that appeared in Lancet Psychiatry alongside the paper (mentioned by Prof Wessely above), Moylan and colleagues suggest that hypotheses about the benefits of CBT and GET remain 'unproven', but that CBT and GET may accelerate improvement:
"The authors hypothesise that the improvement in the APT and SMC only groups might be attributed to the effects of post-trial CBT or GET, because more people from these groups accessed these therapies during follow-up. However, improvement was observed in these groups irrespective of whether these treatments were received, and thus this hypothesis remains unproven. ....Overall, our interpretation of these results is that structured CBT and GET seems to accelerate improvement of self-rated symptoms of chronic fatigue syndrome compared with SMC or SMC augmented with APT..."

Round and Round by Ariel Pink's Haunted Graffiti
It's always the same, as always
Sad and tongue tied
It's got a memory and refrain
I'm afraid, you're afraid
And we die and we live and we're born again
Turn me inside out
What can I say...
Merry go 'round
We go up and around we go
 
Moylan et al rightly point out that improvement occurred "irrespective of whether these treatments were received, and thus this hypothesis remains unproven". Although not apparent from the main paper, the supplementary material throws light on this issue; however, Moylan et al are only half right!

We can see in Table C of the supplementary material (see below) that those in the CBT, APT and SMC groups showed significant improvements even when no additional therapies were provided - so, Moylan et al are correct on that score. By contrast, the same cannot be said of the GET group. At follow-up, GET shows no significant benefit on measures of fatigue (CFQ) or of physical function (SF-36 PF)... whether patients received adequate additional therapy, partial therapy or, indeed, no further therapy (a sketch of how such within-group changes are read follows below).

This is even more interesting when we consider that Table C reveals data for 20 GET patients (16%) who had subsequently received 'adequate CBT' - and it clearly produced no significant benefit on their fatigue or physical function scores. So, what are we to conclude? That CBT is ineffective? That CBT is ineffective following GET? That these patients are 'therapy resistant'? Therapy resistant because they received GET?

Whatever the explanation, GET is the only group to show no improvement during follow-up. Even with no additional therapy, both the SMC controls and the APT group improved, as indeed did the CBT group. The failure of GET patients to respond to adequate additional CBT is curious, inconsistent with the claims made, and does not look 'promising' for either GET or CBT.
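To make the Table C logic concrete, here is a minimal sketch, in Python and with entirely invented scores, of how a within-group change and its 95% confidence interval are read: if the interval spans zero, the subgroup has shown no significant change. This is only an illustration of the method, not a re-analysis of the PACE data.

import numpy as np
from scipy import stats

# Invented CFQ fatigue scores for one illustrative subgroup
# (e.g. the pattern reported for GET patients who received no further therapy)
baseline  = np.array([26, 28, 25, 30, 27, 29, 24, 26, 28, 27], dtype=float)
follow_up = np.array([25, 28, 26, 29, 27, 28, 25, 26, 27, 27], dtype=float)

change = follow_up - baseline           # negative change = improvement (lower fatigue)
mean_change = change.mean()
ci_low, ci_high = stats.t.interval(0.95, len(change) - 1,
                                   loc=mean_change, scale=stats.sem(change))

print(f"mean change = {mean_change:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# The interval here spans zero, i.e. no significant within-group change -
# which is the pattern described above for the GET subgroups in Table C.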

The inferences described above do appear to be holed below the water line.

1) CBT does not appear to accelerate improvement... at least in people who have previously received GET;
2) people who received GET show no continuing improvement post-therapy;
and 3) CBT may heighten the incidence of 'negative change'.

[Tables C and D from the PACE follow-up supplementary material were reproduced here.]

Sunday, 1 November 2015

PACE - Thoughts about Holes



This week Lancet Psychiatry published a long-term follow-up study of the PACE trial assessing psychological interventions for Chronic Fatigue Syndrome/ME - it is available at the journal website following free registration.

On reading it, I was struck by more questions than answers. It is clear that these follow-up data show that the interventions of Cognitive Behavioural Therapy (CBT), Graded Exercise Therapy (GET) and Adaptive Pacing Therapy (APT) fare no better than Specialist Medical Care (SMC). While the lack of difference in key outcomes across conditions seems unquestionable, I am more interested in certain questions thrown up by the study concerning the decisions that were made and how the data were presented.

A few questions that I find hard to answer from the paper...

1) How is 'unwell' defined? 
The authors state that "After completing their final trial outcome assessment, trial participants were offered an additional PACE therapy if they were still unwell, they wanted more treatment, and their PACE trial doctor agreed this was appropriate. The choice of treatment offered (APT, CBT, or GET) was made by the patient's doctor, taking into account both the patient's preference and their own opinion of which would be most beneficial." (White et al 2011)

But how was 'unwell' defined in practice? Did the PACE doctors take patient descriptions of 'feeling unwell' at face value, or did they perhaps refer back to criteria from the previous PACE paper, defining 'normal' as patient scores "within normal ranges for both primary outcomes at 52 weeks" (CFQ 18 or less and SF-36 PF 60 or more)? Did the PACE doctors exclude those who said they were still unwell but scored 'normally', or those who said they were well but scored poorly? None of this seems any clearer from the published protocol for the PACE trial.
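To make the definitional point concrete, here is a tiny Python sketch of the quoted thresholds (CFQ of 18 or less and SF-36 physical function of 60 or more). The function name and the example scores are mine, purely for illustration; only the thresholds come from the quote above.

def within_normal_range(cfq_score, sf36_pf_score):
    """'Normal range' as quoted from the PACE papers: CFQ <= 18 and SF-36 PF >= 60."""
    return cfq_score <= 18 and sf36_pf_score >= 60

# Hypothetical patients: a score-based definition can disagree with how a patient says they feel
print(within_normal_range(cfq_score=17, sf36_pf_score=65))   # True  - 'normal' on paper
print(within_normal_range(cfq_score=22, sf36_pf_score=70))   # False - fatigue score above the threshold

The gap between meeting such a threshold on paper and a patient actually reporting themselves well is exactly the ambiguity in how 'unwell' was operationalised.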

Holes by Mercury Rev
Holes, dug by little moles, angry jealous
Spies, got telephones for eyes, come to you as
Friends, all those endless ends, that can't be
Tied, oh they make me laugh, an' always make me
....Cry


2) How was additional treatment decided and was it biased?
With regard to the follow-up phase, the authors also state that “The choice of treatment offered (APT, CBT, or GET) was made by the patient’s doctor, taking into account both the patient’s preference and their own opinion of which would be most beneficial”.

But what precisely informed the PACE doctors’ choice and consideration of “what would be most beneficial”?

They say: "These choices were made with knowledge of the individual patient's treatment allocation and outcome, but before the overall trial findings were known". This is intriguing... The doctors knew the starting scores of their patients and their finishing scores at 52 weeks. In other words, the decision-making of the PACE doctors was non-blind, and thus informed by the course of the trial and by how they viewed their patients to have been progressing in each of the four conditions.


3) The authors say: "Participants originally allocated to SMC in the trial were the most likely to receive additional treatment followed by those who had APT; those originally allocated to the rehabilitative therapies (CBT and GET) were less likely to receive additional treatment. In so far as the need to seek additional treatment is a marker of continuing illness, these findings support the superiority of CBT and GET as treatments for chronic fatigue syndrome."

That more participants were assigned further treatment following some conditions (SMC, APT) than others (CBT, GET) doesn't necessarily imply "support for the superiority of CBT and GET" at all. It all depends upon the decision-making process underpinning the choices made by PACE clinicians. The trial has not been clear on whether only those who met criteria for being 'unwell' were offered additional treatment... and what were those criteria? This is especially pertinent since we already know that 13% of patients entered the original PACE trial already meeting the criteria for being 'normal'.


Opus 40 by Mercury Rev
"Im alive she cried, but I don't know what that means"

We know that the decision-making of the PACE doctors was not blind to previous treatment and outcome.
It also seems quite possible that participants who had initially been randomly assigned to SMC wanted further treatment because they were so evidently dissatisfied with being assigned to SMC rather than to an intervention arm of the trial - before treatment, half of the SMC participants thought that SMC was 'not a logical treatment' for them and only 41% were confident about being helped by receiving SMC.
Such dissatisfaction would presumably be compounded by receiving a mid-trial newsletter saying how well CBT and GET participants were faring! It appears that, mid-trial, the PACE team published a newsletter for participants which included selected patient testimonials stating how much they had benefited from "therapy" and "treatment". The newsletter also included an article telling participants that the two interventions pioneered by the investigators and being trialled in PACE (CBT and GET) had been recommended as treatments by a U.K. government committee "based on the best available evidence" (see http://www.meassociation.org.uk/2015/10/trial-by-error-the-troubling-case-of-the-pace-chronic-fatigue-syndrome-study-investigation-by-david-tuller-21-october-2015/).

So we cannot rule out the possibility that the SMC participants were also having to suffer the kind of frustration that regularly makes wait-list controls do worse than they would otherwise have done. They were presumably informed, and 'consented', at the start of the trial vis-a-vis the possibility of further (different or the same) therapy at the end of the trial if needed? This effectively makes SMC a wait-list control, and the negative impact of such waiting in psychotherapy and CBT trials is well documented (for a recent example, see http://www.nationalelfservice.net/treatment/cbt/its-all-in-the-control-group-wait-list-control-may-exaggerate-apparent-efficacy-of-cbt-for-depression/).

Let us return to the issue of how the 'need' to seek additional treatment was defined. Undoubtedly the lack of PACE doctor blinding and the mid-trial newsletters promoting CBT and GET, along with possible PACE doctor research allegiance, would all accord with greater numbers of CBT (and GET) referrals... and indeed with CBT being the only therapy that was offered again to some participants (presumably after not being successful the first time!). The decisions appear to have little to do with patients showing a "need to seek additional treatment" and nothing at all to do with establishing the "superiority of CBT and GET as treatments for chronic fatigue syndrome."

Finally

4) Perhaps I have missed something, but the group outcome scores at follow-up seem quite strange. To illustrate with an example, does the follow-up SMC mean CFQ = 20.2 (n=115) also include data from the 6 participants who switched to APT, the 23 who switched to CBT and the 14 who switched to GET? If so, how can this any longer be labelled an SMC condition? The same goes for every other condition: they confound follow-up of an intervention with change of intervention. What do such scores mean...? And how can we now draw any meaningful conclusions about any outcomes under the heading of the initial group to which participants were assigned?
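To illustrate the arithmetic, here is a rough Python sketch of the follow-up 'SMC' mean as a weighted average over four different treatment histories. The subgroup sizes follow from the figures just quoted (115 in total, with 6, 23 and 14 switching to APT, CBT and GET respectively); the subgroup means are invented, chosen only so that the weighted average lands near the reported 20.2.

# Subgroup sizes follow from the figures quoted above; the subgroup means are hypothetical
subgroups = {
    "SMC only":     {"n": 115 - 6 - 23 - 14, "mean_cfq": 21.5},
    "SMC then APT": {"n": 6,                 "mean_cfq": 20.0},
    "SMC then CBT": {"n": 23,                "mean_cfq": 17.5},
    "SMC then GET": {"n": 14,                "mean_cfq": 18.0},
}

total_n = sum(g["n"] for g in subgroups.values())
pooled_mean = sum(g["n"] * g["mean_cfq"] for g in subgroups.values()) / total_n
print(f"pooled 'SMC' mean CFQ at follow-up = {pooled_mean:.1f} (n = {total_n})")
# The single figure reported under 'SMC' averages over four different treatment histories,
# so it no longer tells us anything clean about SMC alone.

Whatever the true subgroup values, the single reported figure is an average over people who did and did not go on to receive the other therapies, which is precisely why it cannot be read as an outcome of 'SMC'.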