Simon McGrath at Phoenix Rising comments on the full text (not freely available) of Brown and Jason’s paper, Validating a measure of myalgic encephalomyelitis/chronic fatigue syndrome symptomatology.

Extract from: Validating a measure of ME/CFS symptomatology – Brown & Jason (DePaul Symptom Questionnaire)

Although this study makes lots of interesting points about case definitions, its primary purpose was to validate the DePaul Symptom Questionnaire (DSQ). The questionnaire was broadly validated (I’ll spare you the tedious statistical details), though the team are planning on tweaking it in the light of these results.

The intro criticises Fukuda for its polythetic criteria, i.e. no specific mandatory symptoms beyond unexplained fatigue (basically: ‘Pick four symptoms. Any four will do’). It then focuses on case definition work by Ramsay, Goudsmit, Dowsett and Hyde:

  • Taken together, these theorists propose a more narrow view of ME and the emergent criteria require an individual to experience post-exertional malaise, at least one neurological symptom, at least one autonomic symptom, and the onset of the condition had to have been sudden (developing over one week or less).

Jason prefers this approach to the more extensive symptoms required by the CCC and ICC, but he ultimately believes we need data-driven approaches to find the most appropriate real-world definition.

The samples: Biobank and DePaul
The main sample used was 214 subjects from the CFIDS BioBank. These were all “diagnosed by a licensed physician specializing in CFS, ME/CFS and ME care”. 74% were female and the average age was 49. Only 12% were working full- or part-time, with 67% on disability benefits. However, 67% had completed college and 24% also had a graduate or professional degree, so this sample is not representative educationally. Note that people volunteer for this rather than being randomly selected, and that may skew the results (this educational skew towards graduates is true of many studies too, e.g. the recent Andrew Miller fMRI one).

The DePaul sample (n=189) used to validate the results was a convenience sample that relied on self-report of current ME/CFS rather than physician diagnosis, and 40% had a graduate degree (76% had any degree), so it is not exactly ideal either.

Results: 3 clusters, modest fit with the data
As covered before, they found three factors/clusters:

  • Neuroendocrine, Autonomic, & Immune Dysfunction
  • Neurological/Cognitive Dysfunction
  • Post-Exertional Malaise

These tie in broadly with the three symptom types highlighted by the theorists: PEM, neurological and autonomic.

However, the ‘fit’ isn’t that good. The success of Factor Analysis in identifying strong clusters is measured by “percentage of variance explained” – or how well the clusters account for the actual data. 100% is a perfect result (so never seen) and 0% is the worst. Typically, 65% or more is seen as very good, and 50% is reasonable.

However, in this study the three clusters only accounted for 42% of the variance. The biggest was the broad neuroendocrine, autonomic and immune cluster, accounting for 31%, with only 5% accounted for by PEM.
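For anyone curious how a ‘percentage of variance explained’ figure is actually produced, here is a minimal sketch in Python using the factor_analyzer package. The symptom ratings below are simulated and the item count is only approximate, so this illustrates the standard calculation rather than reproducing the paper’s analysis.

```python
# Minimal sketch of exploratory factor analysis (EFA) plus the
# "percentage of variance explained" calculation, using the
# factor_analyzer package. All data here are simulated.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n_patients, n_symptoms = 214, 54   # sizes chosen for illustration only
symptoms = pd.DataFrame(
    rng.integers(0, 5, size=(n_patients, n_symptoms)),  # 0-4 severity ratings
    columns=[f"symptom_{i}" for i in range(n_symptoms)],
)

# Extract three factors (as in the paper) with an oblique rotation
efa = FactorAnalyzer(n_factors=3, rotation="promax")
efa.fit(symptoms)

# get_factor_variance() returns (sums of squared loadings,
# proportion of variance per factor, cumulative proportion)
_, prop_var, cum_var = efa.get_factor_variance()
print("Variance explained per factor (%):", np.round(prop_var * 100, 1))
print("Total variance explained (%):     ", round(cum_var[-1] * 100, 1))
```

A total of around 42%, as reported here, would mean the three factors together reproduce well under half of the variation in patients’ symptom ratings.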

In stage two of the study, they looked to see how well these three factors derived from the BioBank data fitted the DePaul cohort data. While I have a reasonable understanding of the ‘exploratory factor analysis’ used in stage one, the ‘confirmatory factor analysis’ used in stage two is beyond me – so I will take the authors’ word for it that the fit was “adequate”, i.e. OK, but not exactly impressive.
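For what it’s worth, confirmatory factor analysis essentially takes the factor structure from stage one as given and asks how well it reproduces a new dataset, with the answer summarised in fit indices such as CFI and RMSEA (“adequate” usually means those indices clear conventional thresholds). Here is a rough sketch of what that looks like in Python using the semopy package; the three-factor model, item names and simulated data are invented purely for illustration and are not the paper’s actual model.

```python
# Rough sketch of a confirmatory factor analysis (CFA) fit check with
# the semopy package. Model, item names and data are all invented.
import numpy as np
import pandas as pd
import semopy

# Hypothetical measurement model: each latent factor drives three items
model_desc = """
PEM        =~ pem_1 + pem_2 + pem_3
NeuroCog   =~ cog_1 + cog_2 + cog_3
AutoImmune =~ auto_1 + auto_2 + auto_3
"""

items = ["pem_1", "pem_2", "pem_3",
         "cog_1", "cog_2", "cog_3",
         "auto_1", "auto_2", "auto_3"]

# Simulate 189 'patients' whose item scores really do follow the model
rng = np.random.default_rng(1)
latent = rng.normal(size=(189, 3))             # three latent factors
loadings = np.repeat(np.eye(3), 3, axis=0)     # 9 items x 3 factors
data = pd.DataFrame(latent @ loadings.T + 0.5 * rng.normal(size=(189, 9)),
                    columns=items)

model = semopy.Model(model_desc)
model.fit(data)

# calc_stats() reports fit indices; by convention CFI above ~0.90 and
# RMSEA below ~0.08 are usually read as 'adequate' fit
print(semopy.calc_stats(model)[["CFI", "RMSEA"]])
```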

In summary, it looks like the authors are on to something important about how symptoms cluster, but they haven’t exactly nailed it yet.

What this means/Next steps
A weakness that the authors mention is that the sample size is on the small side for this particular technique. There is debate about how big a sample is needed, but given the number of symptoms included in this analysis, a sample size of roughly 400 or more would probably be ideal (vs the 214 used in stage one).
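For a sense of where a figure like 400 comes from: a common rule of thumb (my assumption, not a calculation from the paper) is to aim for around 5–10 participants per questionnaire item. A trivial back-of-envelope check in Python, assuming roughly 54 symptom items:

```python
# Back-of-envelope sample-size check using the common 5-10 participants
# per item heuristic. The item count is approximate, not from the paper.
n_items = 54                      # roughly the number of DSQ symptom items
low, high = 5 * n_items, 10 * n_items
print(f"Rule-of-thumb target: {low}-{high} participants "
      f"(stage one actually had 214)")
```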

The good news is that the team are already working on a new analysis using larger datasets.

Another reason the clusters aren’t that strong is that symptoms might not be enough to classify people – maybe you need objective tests too in order to reveal what’s really going on.
The authors said:

  • Future studies might be directed towards validating this DSQ using more objective measures. For example, performance on neuropsychological measures could be related to neurocognitive factor-based scores, and immunologic testing following exercise testing could be associated with post-exertional malaise factor-based scores

Finally, the authors emphasise their view that case definitions should be driven by data:

  • The current findings may contribute to the development of a data-driven case definition for this illness. Future work in this field should continue to utilize symptom data from well-characterized patient samples as the basis of case definition, rather than relying on clinical consensus.

To which I would simply add their earlier point about using objective markers alongside symptoms.
