PACE – Thoughts about Holes, by Professor Keith R Laws (Professor of Cognitive Neuropsychology), LawsDystopiaBlog, 1 November 2015

This week Lancet Psychiatry published a long-term follow-up study of the PACE trial, which assessed psychological interventions for Chronic Fatigue Syndrome/ME; it is available at the journal’s website following free registration.

On reading it, I was struck by more questions than answers. It is clear that these follow-up data show that the interventions of Cognitive Behavioural Therapy (CBT), Graded Exercise Therapy (GET) and Adaptive Pacing Therapy (APT) fared no better than Standard Medical Care (SMC). While the lack of difference in key outcomes across conditions seems unquestionable, I am more interested in certain questions thrown up by the study concerning the decisions that were made and how the data were presented.

A few questions that I find hard to answer from the paper…

1) How is ‘unwell’ defined?
The authors state that “After completing their final trial outcome assessment, trial participants were offered an additional PACE therapy if they were still unwell, they wanted more treatment, and their PACE trial doctor agreed this was appropriate. The choice of treatment offered (APT, CBT, or GET) was made by the patient’s doctor, taking into account both the patient’s preference and their own opinion of which would be most beneficial.” (White et al., 2011)

But how was ‘unwell’ defined in practice? Did the PACE doctors take patient descriptions of ‘feeling unwell’ at face value, or did they perhaps refer back to the criteria from the previous PACE paper, which defined ‘normal’ as scoring “within normal ranges for both primary outcomes at 52 weeks” (CFQ 18 or less and PF 60+)? Did the PACE doctors exclude those who said they were still unwell but scored ‘normally’, or those who said they were well but scored poorly? None of this is any clearer from the published protocol for the PACE trial.
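To make concrete what a purely score-based reading of ‘still unwell’ would look like, here is a minimal illustrative sketch in Python. The function names are mine and nothing here is taken from the PACE procedures themselves; the thresholds are simply the ‘normal range’ cut-offs quoted above.

# Illustrative only: encodes the 'normal range' thresholds quoted above
# (Chalder Fatigue Questionnaire <= 18 and SF-36 Physical Function >= 60).
# Whether PACE doctors applied anything like this, or relied instead on
# patients' own reports of feeling unwell, is exactly the open question.

def within_normal_range(cfq_score: int, sf36_pf_score: int) -> bool:
    """Score-based 'normal' as defined in the earlier PACE paper."""
    return cfq_score <= 18 and sf36_pf_score >= 60

def still_unwell_by_scores(cfq_score: int, sf36_pf_score: int) -> bool:
    """One possible, purely score-based reading of 'still unwell'."""
    return not within_normal_range(cfq_score, sf36_pf_score)

# A patient can report feeling unwell while scoring 'normally', and vice versa;
# the paper does not say which signal the PACE doctors actually used.
print(still_unwell_by_scores(cfq_score=17, sf36_pf_score=65))  # False by scores alone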

2) How was additional treatment decided and was it biased?
With regard to the follow-up phase, the authors also state that “The choice of treatment offered (APT, CBT, or GET) was made by the patient’s doctor, taking into account both the patient’s preference and their own opinion of which would be most beneficial”.

But what precisely informed the PACE doctors’ choice and consideration of “what would be most beneficial”?

They say, “These choices were made with knowledge of the individual patient’s treatment allocation and outcome, but before the overall trial findings were known”. This is intriguing… The doctors knew the starting scores of their patients and the finishing scores at 52 weeks. In other words, the decision-making of PACE doctors was non-blind, and thus informed by the course of the trial and by how they viewed their patients to have progressed in each of the four conditions.

3) The authors say: “Participants originally allocated to SMC in the trial were the most likely to receive additional treatment followed by those who had APT; those originally allocated to the rehabilitative therapies (CBT and GET) were less likely to receive additional treatment. In so far as the need to seek additional treatment is a marker of continuing illness, these findings support the superiority of CBT and GET as treatments for chronic fatigue syndrome.”

The fact that more participants were assigned further treatment following some conditions (SMC, APT) rather than others (CBT, GET) doesn’t necessarily imply “support for the superiority of CBT and GET” at all. It all depends upon the decision-making process underpinning the choices made by PACE clinicians. The trial has not been clear on whether only those who met criteria for being ‘unwell’ were offered additional treatment… and what were those criteria? This is especially pertinent since we already know that 13% of the patients entered into the original PACE trial already met the criteria for being ‘normal’.

We know that the decision making of PACE doctors was not blind to previous treatment and outcome.

It also seems quite possible that participants who had initially been randomly assigned to SMC wanted further treatment because they were so evidently dissatisfied with being assigned to SMC rather than an intervention arm of the trial – before treatment, half of the SMC participants thought that SMC was ‘not a logical treatment’ for them and only 41% were confident about being helped by receiving SMC.

Such dissatisfaction would presumably be compounded by receiving a mid-trial newsletter saying how well CBT and GET participants were faring! It appears that, mid-trial, the PACE team published a newsletter for participants which included selected patient testimonials stating how much they had benefited from “therapy” and “treatment”. The newsletter also included an article telling participants that the two interventions pioneered by the investigators and being trialled in PACE (CBT and GET) had been recommended as treatments by a UK government committee “based on the best available evidence” (see http://www.meassociation.org.uk/?s=trial+by+error).

So we also cannot rule out the possibility that the SMC participants were suffering the kind of frustration that regularly makes wait-list controls do worse than they otherwise would have done.

They were presumably informed, and ‘consented’, at the start of the trial vis-à-vis the possibility of further (different or the same) therapy at the end of the trial if needed? This effectively makes SMC a wait-list control, and the negative impact of such waiting in psychotherapy and CBT trials is well documented (for a recent example, see
http://www.nationalelfservice.net/treatment/cbt/its-all-in-the-control-group-wait-list-control-may-exaggerate-apparent-efficacy-of-cbt-for-depression/)

Let us return to the issue of how the ‘need’ to seek additional treatment was defined. Undoubtedly, the lack of PACE doctor blinding and the mid-trial newsletters promoting CBT and GET, along with possible PACE doctor research allegiance, would all accord with greater numbers of CBT (and GET) referrals… and indeed, with CBT being the only therapy offered again to some participants who had already received it – presumably after it had not been successful the first time! The decisions appear to have little to do with patients showing a “need to seek additional treatment” and nothing at all to do with establishing the “superiority of CBT and GET as treatments for chronic fatigue syndrome”.

Finally

4) Perhaps I have missed something, but the group outcome scores at follow-up seem quite strange. To illustrate with an example, does the follow-up SMC mean CFQ of 20.2 (n = 115) also include data from the 6 participants who switched to APT, the 23 who switched to CBT and the 14 who switched to GET? If so, how can this still be labelled an SMC condition? The same goes for every other condition – they confound follow-up of an intervention with a change of intervention. What do such scores mean? And how can we draw any meaningful conclusions about any outcomes reported under the heading of the initial group to which participants were assigned?
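To make the confound concrete, here is a small hypothetical sketch in Python. The subgroup sizes follow the breakdown queried above (6 + 23 + 14 switchers plus the remaining 72, giving n = 115); the subgroup means are invented, chosen only to show that many quite different subgroup patterns could pool to the same headline figure of roughly 20.2.

# Hypothetical illustration of how a single 'SMC' follow-up mean pools
# participants who stayed on SMC with those who later received APT, CBT or GET.
# The subgroup sizes follow the breakdown queried above (total n = 115);
# the subgroup means are invented purely for illustration.

subgroups = {
    "SMC only":     {"n": 72, "mean_cfq": 21.0},
    "SMC then APT": {"n": 6,  "mean_cfq": 19.0},
    "SMC then CBT": {"n": 23, "mean_cfq": 18.5},
    "SMC then GET": {"n": 14, "mean_cfq": 19.5},
}

total_n = sum(g["n"] for g in subgroups.values())
pooled_mean = sum(g["n"] * g["mean_cfq"] for g in subgroups.values()) / total_n

print(total_n)                 # 115
print(round(pooled_mean, 1))   # 20.2 -- one pooled figure hiding four treatment histories

The single pooled figure tells us nothing about how those who stayed on SMC alone actually fared, which is precisely why labelling it an ‘SMC’ outcome is so hard to interpret.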

 
