Prof Jonathan Edwards (UK) and Dr David Tuller (USA) add to the growing number of articles by clinicians and researchers challenging the scientific credentials of the PACE researchers.
PACE team response shows a disregard for the principles of science, by Prof Jonathan Edwards in Journal of Health Psychology [March 28, 2017]
The PACE trial of cognitive behavioural therapy and graded exercise therapy for chronic fatigue syndrome/myalgic encephalomyelitis has raised serious questions about research methodology. An editorial article by Geraghty gives a fair account of the problems involved, if anything understating the case.
The response by White et al. fails to address the key design flaw – an unblinded study with subjective outcome measures – apparently demonstrating a lack of understanding of basic trial design requirements. The failure of the academic community to recognise the weakness of trials of this type suggests that a major overhaul of quality control is needed.
No scientific ground to stand on, blog post by David Tuller [March 27, 2017]
David Tuller, lecturer in public health and journalism at the University of California, Berkeley, has written extensively about “the flaws of the PACE trial”. He thinks that “the PACE authors have no scientific ground to stand on”.
The PACE investigators continue to refuse to actually address the key concerns raised about their study. First, they continue to refer to this as a “secondary” paper. While it is true that the PACE authors, for reasons known only to themselves, designated “recovery” as a secondary outcome in the PACE protocol, “recovery” is surely not of secondary importance to patients, so dismissing the paper’s significance in this way is unwarranted.
They dismiss the difference in recovery outcomes between their paper and the reanalysis as just a matter of opinion, on the grounds that the reanalysis used stricter criteria. They fail to mention that the reanalysis used only the specific criteria the PACE investigators outlined in their own protocol, and then abandoned in favor of ones that allowed them to report statistically significant recovery rates. They received absolutely no approval from oversight committees for this redefinition of recovery.
In their detailed protocol, they included four very clear criteria for recovery. In the paper as published, every one of these four criteria was significantly weakened, in ways documented by Wilshire et al. For two of the four criteria – physical function and fatigue – participants could get worse during the trial and yet still meet the “recovery” thresholds, because the revised thresholds represented worse health than the trial’s entry criteria required. Thirteen percent of the trial participants met one or both of these “recovery” criteria at baseline.
They have referred to these thresholds as being within the normal range. Yet this is an utterly dishonest argument. They generated their absurdly expansive “normal ranges” with the wrong calculation: they took the method for finding the normal range of a normally distributed population – the mean plus or minus one standard deviation – and applied it to population samples that they knew were highly skewed in a positive direction. Dr. White himself, in a 2007 paper he co-wrote, had explained how using this method to determine a purported “normal range” for the SF-36 physical function scale yielded distorted findings. This caveat was not included in the Lancet or Psychological Medicine papers.
The authors themselves know that what they are referring to as a “normal range” is not the standard statistical “normal range” that includes two-thirds of the values but a wildly generous “normal range” that includes upwards of 90 percent of all the population values.
That’s why they ended up with the absurd “normal range” threshold of 60. The same strategy applies to the fatigue normal range – it was developed in the same intellectually dishonest way, and yet they continue to refer to it as a “normal range”. They have never explained why they used the wrong statistical method to develop normal ranges from highly skewed samples. Moreover, Dr. Chalder has never explained why she referred to these absurd “normal ranges” as “getting back to normal” in the Lancet press conference.
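The statistical point can be illustrated with a small simulation. This is a hypothetical sketch, not the actual PACE or population data: it assumes a made-up SF-36-like score distribution in which most healthy adults cluster near the 100 ceiling while a minority score much lower, and shows how the mean-minus-one-standard-deviation formula then produces a low threshold that far more than the nominal ~84% of the population clears.

```python
import random
import statistics

random.seed(42)

# Hypothetical, simulated population (NOT real survey data): SF-36
# physical function scores run 0-100, with most healthy adults near
# the ceiling and a minority of older or ill people much lower,
# giving a skewed, non-normal distribution.
def simulate_score() -> float:
    if random.random() < 0.85:      # healthy majority near the ceiling
        return random.uniform(85, 100)
    return random.uniform(20, 80)   # minority with impaired function

scores = [simulate_score() for _ in range(100_000)]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)
lower_bound = mean - sd  # the "mean minus 1 SD" cut-off criticized above

# For a normally distributed variable, about 84% of values lie above
# mean - 1 SD. In this skewed sample, the long lower tail inflates the
# standard deviation, dragging the cut-off down so that it sits well
# below the healthy majority and sweeps in a larger share of people.
share_above = sum(s >= lower_bound for s in scores) / len(scores)
print(f"lower bound = {lower_bound:.0f}, share above it = {share_above:.1%}")
```

In this toy sample the lower bound lands far below where most healthy scores sit, and the fraction of the population falling "within the normal range" exceeds the two-thirds (or the ~84% above the lower bound) that the formula is designed to capture for normally distributed data.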
They have recently argued, in response to Wilshire et al, that it doesn’t matter that some participants were recovered on the physical function or the fatigue outcomes at baseline because there were other recovery criteria. This is truly a bizarre response for researchers to make. It is also a serious violation of the rules of honest scientific inquiry. It is unclear to me why we all have to waste so much intellectual time and energy simply to demonstrate that studies in which participants can be disabled and recovered simultaneously on key indicators should never have been published and, once published, need to be retracted immediately. The PACE authors have no scientific ground to stand on.