Virology blog: Trial By Error, Continued: More Nonsense from The Lancet Psychiatry
by David Tuller, 19 January 2016
The PACE authors have long demonstrated great facility in evading questions they don’t want to answer. They did this in their response to correspondence about the original 2011 Lancet paper. They did it again in the correspondence about the 2013 recovery paper, and in their response to my Virology Blog series. Now they have done it in their answer to critics of their most recent paper on follow-up data, published last October in The Lancet Psychiatry.
(They published the paper just a week after my investigation ran. Wasn’t that a lucky coincidence?)
The Lancet Psychiatry follow-up had null findings: Two years or more after randomization, there were no differences in reported levels of fatigue and physical function between those assigned to any of the groups. The results showed that cognitive behavior therapy and graded exercise therapy provided no long-term benefits because those in the other two groups reported improvement during the year or more after the trial was over. Yet the authors, once again, attempted to spin this mess as a success.
In their letters, James Coyne, Keith Laws, Frank Twist, and Charles Shepherd all provide sharp and effective critiques of the follow-up study. I’ll let others tackle the PACE team’s counter-claims about study design and statistical analysis. I want to focus once more on the issue of the PACE participant newsletter, which they again defend in their Lancet Psychiatry response.
Here’s what they write: “One of these newsletters included positive quotes from participants. Since these participants were from all four treatment arms (which were not named) these quotes were [not]…a source of bias.”
Let’s recap what I wrote about this newsletter in my investigation. The newsletter was published in December 2008, with at least a third of the study’s sample still undergoing assessment. The newsletter included six glowing testimonials from participants about their positive experiences with the trial, as well as a seventh statement from one participant’s primary care doctor. None of the seven statements recounted any negative outcomes, presumably conveying to remaining participants that the trial was producing a 100% satisfaction rate. The authors argue that the absence of the specific names of the study arms means that these quotes could not be “a source of bias.”
This is a preposterous claim. The PACE authors apparently believe that it is not a problem to influence all of your participants in a positive direction, and that this does not constitute bias. They have repeated this argument multiple times. I find it hard to believe they take it seriously, but perhaps they actually do. In any case, no one else should. As I have written before, they have no idea how the testimonials might have affected anyone in any of the four groups—so they have no basis for claiming that this uncontrolled co-intervention did not alter their results.
Moreover, the authors now ignore the other significant effort in that newsletter to influence participant opinion: publication of an article noting that a federal clinical guidelines committee had selected cognitive behavior therapy and graded exercise therapy as effective treatments “based on the best available evidence.” Given that the trial itself was supposed to be assessing the efficacy of these treatments, informing participants that they have already been deemed to be effective would appear likely to impact participants’ responses. The PACE authors apparently disagree.
It is worth remembering what top experts have said about the publication of this newsletter and its impact on the trial results. “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism,” Bruce Levin, a biostatistician at Columbia University, told me.
My Berkeley colleague, epidemiologist Arthur Reingold, said he was flabbergasted to see that the researchers had distributed material promoting the interventions being investigated, whether they were named or not. This fact alone, he noted, made him wonder if other aspects of the trial would also raise methodological or ethical concerns.
“Given the subjective nature of the primary outcomes, broadcasting testimonials from those who had received interventions under study would seem to violate a basic tenet of research design, and potentially introduce substantial reporting and information bias,” he said. “I am hard-pressed to recall a precedent for such an approach in other therapeutic trials. Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”
David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.
Keith R Laws responded 20 January 2016:
David – you raise a key issue concerning the PACE authors’ (lack of) response to the point in our letter about the leaflet and equipoise. The point is that the groups had quite different expectations to start with. When randomly assigned to their groups and asked whether their ‘Treatment is Logical’, those assigned to Adaptive Pacing Therapy (APT) gave 84% endorsement, Cognitive Behavioural Therapy (CBT) 71%, and Graded Exercise Therapy (GET) 84% – by contrast, Specialised Medical Care (SMC) was endorsed as a ‘logical treatment’ by fewer than half – 49%. And following treatment, only 50% were satisfied with SMC compared with 82–86% for the other three conditions. Further, at the end of treatment, the proportion dissatisfied with treatment was greater in the SMC group than in the other three groups *combined*. This is covered in my blog.
My point is that SMC becomes akin to a wait-list control, which is well known to induce detrimental effects in participants – effects only likely to be worsened by receipt of a leaflet outlining how well everyone is doing, while those who see their assignment as logical are buoyed up by its positive statements.
Nobody is arguing that the impact of a leaflet could predict the whole pattern of results – the key point is the unequivocal introduction of bias against Specialised Medical Care and in favour of the other three treatments.