Skeptimedia is a commentary on mass media treatment of issues concerning science, the paranormal, and the supernatural.
Statistics and Medical Studies
November 18, 2008. The belief that group therapy can help cancer patients live longer is widely held among those who believe that stress is a killer we can overcome if only we would relax. This is the view of both Herbert Benson, who thinks he made a great discovery called "the relaxation response," and David Spiegel, who thinks group therapy can increase the longevity of breast cancer patients. Spiegel's first study on the subject was published in 1989 in The Lancet. The study included 86 women and found that women in group therapy lived significantly longer than the controls (36.6 months versus 18.9 months). A lot of hoopla followed, but the study needed to be replicated, and when it was, the results were quite different. In 2001, a larger study found no evidence that group therapy extended the lives of breast cancer patients. Spiegel rationalized that improvements in conventional cancer treatment since the 1980s might be masking the independent impact that group therapy has on the course of the disease. He also claimed that since most patients have probably heard that group therapy increases longevity, even those assigned to the control group would look outside the group for social support and group therapy. Spiegel did a third study, but he is still sitting on the data because they don't support his hypothesis. He asked for more funding to extend the study for an additional five years. That was more than eight years ago.*
Now another study, led by psychology professor Barbara Andersen of Ohio State University, has been published that claims "psychological counseling, muscle relaxation and other strategies for reducing stress in breast cancer patients can cut their risk of death from the disease by more than half."* The study followed 227 women who had lumpectomies or modified radical mastectomies and tracked them for a median of 11 years. The women were divided into two groups: one met with a pair of psychologists 26 times during the first year after surgery; the control group didn't meet with psychologists. Both groups received standard medical follow-up care. The results were that 19 of the 114 (17%) patients who received counseling died of breast cancer, compared with 25 of the 113 (22%) patients who didn't. That is six more deaths in the control group, or 2.6% of the whole study population. To claim that these data add up to cutting anyone's risk of dying of cancer in half seems to be a bit of a stretch. I don't think the data are robust enough to warrant claiming a causal connection between the counseling and the fact that there were five percentage points more deaths in the control group.
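To see why, it helps to separate absolute from relative risk. Here is a minimal Python sketch using the counts reported above (a crude comparison that ignores follow-up time and censoring; the study's headline figure came from a survival analysis, not from these raw proportions):

```python
from scipy.stats import fisher_exact

# Breast cancer deaths reported in the article
deaths_tx, n_tx = 19, 114      # counseling group
deaths_ctl, n_ctl = 25, 113    # control group

risk_tx = deaths_tx / n_tx     # about 0.167
risk_ctl = deaths_ctl / n_ctl  # about 0.221

print(f"absolute risk difference: {risk_ctl - risk_tx:.3f}")  # about 0.055
print(f"relative risk: {risk_tx / risk_ctl:.2f}")             # about 0.75

# Naive Fisher exact test on the raw counts (ignores follow-up time)
_, p = fisher_exact([[deaths_tx, n_tx - deaths_tx],
                     [deaths_ctl, n_ctl - deaths_ctl]])
print(f"two-sided p-value: {p:.2f}")  # well above 0.05
```

On the raw counts the relative risk is about 0.75, not 0.5, and the naive test is nowhere near significant; the "more than half" figure evidently comes from the study's survival modeling, which this crude comparison does not attempt.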
The problem with drawing the grand conclusion that breast cancer patients cut their risk in half by this relaxation therapy is that the number of women in the study is relatively small, while the duration of the study is arbitrary. Had the researchers followed the patients for one or two more years, the data might have yielded a very different result. If only 2% of the counseling group had died against 22% of the control group, then these data would be impressive and would strongly indicate that something besides chance was going on, though we'd still want to see the study replicated.
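Chance has a lot of room to work in groups this size. As a rough illustration, here is a hypothetical simulation that assumes the same true death rate in both arms (the pooled rate from the study) and asks how often a five-point gap opens up anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
true_rate = 44 / 227           # pooled death rate across both arms
n_tx, n_ctl = 114, 113

# Simulate both arms with an identical true death rate
deaths_tx = rng.binomial(n_tx, true_rate, trials)
deaths_ctl = rng.binomial(n_ctl, true_rate, trials)
gap = deaths_ctl / n_ctl - deaths_tx / n_tx

# How often does chance alone open a gap of 5+ percentage points?
print(f"{(gap >= 0.05).mean():.0%}")  # roughly one trial in six
```

In this sketch, roughly one simulated trial in six shows the control arm doing at least five percentage points worse even though the two arms are identical by construction.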
Another example of misleading statistics in a scientific medical study was published last week regarding statins. The study involved nearly 18,000 people worldwide and included men 50 and older and women 60 and older who did not have high cholesterol or histories of heart disease. "What they did have was high levels of a protein called high-sensitivity C-reactive protein, or CRP, which indicates inflammation in the body."* (Many scientists believe that inflammation is a better indicator than cholesterol of who will develop heart disease.)
The study found that the risk of heart attack was more than cut in half for people who took statins. The clinical trial, which was designed to last for five years, was halted after less than two years by an independent safety monitoring board on the grounds that the statin was clearly beneficial. But is it? Had the study continued for the full duration, the numbers might have evened out, and problematic side effects might have proved too many and too severe to warrant prescribing statins to healthy people. Ben Goldacre noted that even the present numbers aren't that impressive:
On placebo, your risk of a heart attack in the trial was 0.37 events per 100 person years, and if you were taking rosuvastatin [Crestor], it fell to 0.17 events per 100 person years. 0.37 to 0.17. Woohoo. And you have to take a pill every day. And it might have side effects.
The vast majority of those who did not get a statin did quite well. One of the more important statistics is the number needed to treat, a measure of how many people need to be treated for just one person to be helped. Goldacre estimated that 200 people would have to take the statin to save one life. The editorial in The New England Journal of Medicine (where the study was published) concluded that treating 120 people for about two years would help one person. The study authors, using different criteria, came up with a figure of 95.*
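The arithmetic behind number needed to treat is simple: it is the reciprocal of the absolute risk reduction, which makes it very sensitive to which endpoint and time horizon you pick. A back-of-the-envelope sketch using Goldacre's heart-attack-only rates lands in the neighborhood of his estimate; the lower figures quoted above count a broader set of events (the rates and the two-year horizon are the only inputs here, the rest is arithmetic):

```python
# Heart-attack rates quoted by Goldacre, in events per 100 person-years
rate_placebo = 0.37
rate_statin = 0.17
years = 2  # the trial was stopped at roughly the two-year mark

# Absolute risk reduction per person over the trial period
arr = (rate_placebo - rate_statin) / 100 * years   # 0.004
nnt = 1 / arr
print(f"NNT over {years} years: {nnt:.0f}")        # about 250
```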
Halting the study early couldn't have disappointed AstraZeneca, the drug firm that sponsored the study and supplied the only statin used in the trial. Rosuvastatin sells for more than $3 a pill. The leader of the study, Dr. Paul M. Ridker, director of the Center for Cardiovascular Disease Prevention at Brigham and Women's Hospital in Boston, said expanding statin use could prevent about 250,000 heart attacks, strokes, vascular procedures or cardiac deaths over five years. Ridker is probably not too unhappy about stopping the study early either. He is one of the foremost advocates of C-reactive protein testing. That test runs from $20 to $50 a pop.*
How many people will suffer side effects, such as liver disease or muscle pain, but not be helped by taking a daily statin is a statistic I haven't been able to find. Maybe the researchers weren't looking for it.
reader comments:

You suggest that following 227 patients over a median of 11 years represents a sample that is too small. I'm not sure what rule you use to declare a sample too small, and I'm worried that you might be dissatisfied with the sample size because of the conclusion the study reached.
reply: I used the expression "relatively small" intentionally. I don't claim that the samples in this study were too small to be meaningful. Elsewhere I've stated that I agree with the position that a high-caliber controlled study should have at least 25 subjects in each group.
There are rules about sample sizes that you can follow. In particular, when you are observing discrete events, it is the number of events rather than the number of patients or the amount of follow-up time that is most important.
One rule is that you need roughly 25 to 50 events in each group in order to have reasonable precision. This rule is based on the binomial distribution, and may be a bit too harsh for survival time data.
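The intuition behind a 25-to-50-events rule is precision: with k events, the relative uncertainty in an estimated event rate shrinks roughly as 1/sqrt(k). A quick sketch of that scaling (a rough Poisson approximation to the same idea, for intuition only):

```python
import math

# Approximate 95% relative precision of an event rate with k events
for k in (10, 25, 50, 100):
    rel = 1.96 / math.sqrt(k)
    print(f"{k:4d} events: rate pinned down to about +/-{rel:.0%}")
# 10 events: +/-62%; 25: +/-39%; 50: +/-28%; 100: +/-20%
```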
Anyway, I think that having 19 and 25 events respectively is a pretty good (but not terrific) sample size, though it would help if the researchers presented confidence intervals rather than p-values.
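For what it's worth, here is what a confidence interval for the raw risk difference would look like on the article's counts (a simple Wald interval that ignores censoring and follow-up time, so the study's own survival analysis would differ):

```python
import math

deaths_tx, n_tx = 19, 114
deaths_ctl, n_ctl = 25, 113
p_tx, p_ctl = deaths_tx / n_tx, deaths_ctl / n_ctl

diff = p_ctl - p_tx
se = math.sqrt(p_tx * (1 - p_tx) / n_tx + p_ctl * (1 - p_ctl) / n_ctl)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"risk difference: {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# about 0.055 with a CI of roughly (-0.048, 0.157): it comfortably spans zero
```

An interval like this makes the point at a glance: the data are consistent with anything from a modest harm to a sizable benefit.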
So what is your rule for deciding whether a sample size is inadequate?
reply: In this case, I'd agree that if there were only 2 events (deaths) in the counseling group versus 25 in the control group, these sample sizes would be more than adequate to justify some grand conclusions about the probability of counseling being a significant causal factor in the outcome. I've rewritten some of the article to make this point clear.
The other comment that I felt was unfair was your suggestion that the amount of follow-up time was arbitrary, and that different follow-up times might produce different results.
It's certainly possible that a different follow-up time would produce a different result. Short-term studies tend to overstate the effectiveness of an intervention, for example. This is a long-term study, though (11 years median time is pretty impressive). I doubt that asking for 13 years of data on average or 9 years of data on average would lead to markedly different results. You'd have to have a rather bizarre survival function to produce markedly different results over such a long time frame.
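To put rough numbers on that: under a simple exponential survival model fitted to the reported death rates (an illustrative assumption, not the study's actual curves), the gap between the arms drifts only slowly with follow-up time:

```python
import math

# Fit constant hazards to ~17% and ~22% dead at the 11-year median
lam_tx = -math.log(1 - 19 / 114) / 11
lam_ctl = -math.log(1 - 25 / 113) / 11

for t in (9, 11, 13):
    gap = math.exp(-lam_tx * t) - math.exp(-lam_ctl * t)
    print(f"{t:2d} years: survival gap {gap:.1%}")
# about 4.6% at 9 years, 5.5% at 11, and 6.2% at 13
```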
Certainly if the median time of follow-up was 6 months, I'd have a problem, especially since the study couldn't be blinded. Placebo effects are probably stronger in short term studies.
reply: Again, maybe I wasn't as clear as I could have been. The difference between the two groups is pretty small, but it might be even smaller (or larger) after another year or two of follow-up. I've added a sentence or two that I hope will make it clear why I think the grand conclusion drawn by the researcher was not justified.
It's possible to look at the survival curves and speculate what is going on. But unless the survival curves cross (one group shows superior results for the first three years and the other group shows superior results thereafter), I'd not bring up the issue at all.
This is all a matter of opinion, of course. You can set whatever standard you like (at least 20 years of follow-up with at least 500 patients in each group, for example). But I would suggest that such a standard would be unrealistic and would force you to ignore 99% of all research studies.
You're on stronger grounds pointing out that the previous literature is mixed on the subject. I don't buy the argument that advances in cancer treatment have made it harder to show the effectiveness of a psychological intervention. The worst-case scenario is that improvements would lead to fewer people dying, making it harder to reach the standard of 25 to 50 events per group. But since the studies are much larger, that's a moot point.
I do enjoy your newsletter, website, and book, and I hope you take these comments constructively. -- Steve Simon
reply: Thanks for giving me the opportunity to clarify a couple of points.