
fallacy of suppressed evidence

One of the basic principles of cogent argumentation is that an argument should present all the relevant evidence. An argument that omits relevant evidence appears stronger and more cogent than it really is.

The fallacy of suppressed evidence occurs when an arguer intentionally omits relevant data. This is a difficult fallacy to detect because we often have no way of knowing that we haven't been told the whole truth. Who knew that when behavioral therapists claimed a 60 to 70 percent success rate for ERP therapy with OCD patients, they were omitting 40 to 50 percent of the data? (ERP stands for "exposure and response prevention"; defenders of the treatment systematically ignored the number of patients who wouldn't participate once they found out what it entailed; they also didn't count those who dropped out. See Schwartz & Begley, The Mind & The Brain, p. 5.) At least Rupert Sheldrake admitted he omitted 40% of his data when he claimed he had statistical evidence of telepathy in a parrot!
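The arithmetic of the omission is worth making explicit. Here is a minimal sketch in Python, using hypothetical counts (the sources cited above do not report exact figures), showing how excluding refusals and dropouts inflates a success rate:

```python
# Hypothetical counts for illustration only; the sources cited above do
# not report exact figures. Suppose 100 OCD patients are offered ERP.
referred = 100
refused = 25      # decline once they learn what the therapy entails
dropped_out = 20  # begin the therapy but do not finish it

completed = referred - refused - dropped_out  # 55 patients finish
successes = 36                                # ~65% of completers improve

# Success rate among completers only -- the figure defenders quoted:
print(f"Completers only: {successes / completed:.0%}")  # 65%

# Intention-to-treat rate, counting everyone offered the therapy:
print(f"Everyone offered therapy: {successes / referred:.0%}")  # 36%
```

On these made-up numbers, the same 36 recoveries are either a 65 percent success rate or a 36 percent one, depending entirely on whose data you quietly drop.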

Many advertisements commit this fallacy. Ads inform us of a product's dangers only if required to do so by law. Ads never state that a competitor's product is equally good. The coal, asbestos, nuclear fuel, and tobacco industries have knowingly suppressed evidence regarding the health of their employees or the health hazards of their industries.

Occasionally scientists will suppress evidence, making a study seem more significant than it is. In the December 1998 issue of The Western Journal of Medicine, scientists Fred Sicher, Elisabeth Targ, Dan Moore II, and Helene S. Smith published "A Randomized Double-Blind Study of the Effect of Distant Healing in a Population With Advanced AIDS--Report of a Small Scale Study." (See my article on the Sicher-Targ distance healing report.) The authors do not mention, nor has The Western Journal of Medicine ever acknowledged, that the study was originally designed and funded to determine one specific effect: death. The 1998 study was designed as a follow-up to a 1995 study of 20 patients with AIDS, ten of whom were prayed for by psychic healers. Four of the patients died, a result consistent with chance, but all four were in the control group, a result that struck these scientists as anomalous enough to warrant further study. I don't know whether evidence was suppressed or whether the scientists doing the study were simply incompetent, but the four patients who died were the four oldest in the study. The 1995 study did not control for age when it assigned the patients to either the control or the healing prayer group. Any controlled study on mortality that does not control for age is by definition not a properly designed study.
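To see why the clustering of deaths looked anomalous, consider a quick calculation (a sketch in Python, assuming two groups of 10 and ignoring age, which, as noted, the 1995 study failed to control for):

```python
from math import comb

# 20 patients, 10 per group; 4 deaths in total.
# If dying were independent of group assignment, the chance that all
# four deaths fall in the designated control group is hypergeometric:
p_all_in_control = comb(10, 4) / comb(20, 4)
print(f"P(all 4 deaths in the control group) = {p_all_in_control:.3f}")  # ~0.043

# The chance that all four deaths cluster in either one of the two groups:
p_same_group = 2 * p_all_in_control
print(f"P(all 4 deaths in the same group) = {p_same_group:.3f}")  # ~0.087
```

A probability of around four percent is suggestive but hardly compelling, and the calculation assumes that group assignment was not confounded with age, which is exactly what happened.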

The follow-up study, however, did suppress evidence, yet it is "widely acknowledged as the most scientifically rigorous attempt ever to discover if prayer can heal" (Bronson 2002). The standard format for scientific reports is to begin with an abstract that summarizes the contents of the report. The Abstract for the Sicher report notes that controls were done for age, number of AIDS-defining illnesses, and cell count. Patients were randomly assigned to the control or healing prayer groups. The study followed the patients for six months. "At 6 months, a blind medical chart review found that treatment subjects acquired significantly fewer new AIDS-defining illnesses (0.1 versus 0.6 per patient, P = 0.04), had lower illness severity (severity score 0.8 versus 2.65, P = 0.03), and required significantly fewer doctor visits (9.2 versus 13.0, P = 0.01), fewer hospitalizations (0.15 versus 0.6, P = 0.04), and fewer days of hospitalization (0.5 versus 3.4, P = 0.04)." These numbers are very impressive. They indicate that the measured differences were not likely due to chance. Whether they were due to healing prayer (HP) is another matter, but the scientists concluded their abstract with the claim: "These data support the possibility of a DH [distant healing] effect in AIDS and suggest the value of further research." Two years later the team, led by Elisabeth Targ, was granted $1.5 million of our tax dollars from the National Institutes of Health Center for Complementary Medicine to do further research on the healing effects of prayer.

What the Sicher study didn't reveal was that the original study had not been designed to make any of the measurements they report as significant. Of course, any researcher who didn't report significant findings just because the original study hadn't set out to investigate them would be remiss. The standard format of a scientific report allows such findings to be noted in the abstract or in the Discussion section of the report. It would have been appropriate for the Sicher report to have noted in the Discussion section that since only one patient died during the study, it appears that the new drugs being given to AIDS patients as part of their standard therapy (triple-drug anti-retroviral therapy) were having a significant effect on longevity. They might even have suggested that this finding warranted further research into the effectiveness of the new drug therapy. However, the Sicher report Abstract doesn't even mention that only one of the subjects died during the study, indicating that the authors didn't recognize a truly significant research finding. It may also indicate that the scientists didn't want to call attention to the fact that their original study was designed to study the effect of healing prayer on the mortality rate of AIDS patients. Since only one patient died, perhaps they felt they had nothing to report.

It was only after the study was completed and the data mined that they came up with the suggestive and impressive statistics presented in their published report. The Texas sharpshooter fallacy seems to have been committed here. Under certain conditions, mining the data would be perfectly acceptable. For example, if your original study was designed to study the effectiveness of a drug on blood pressure but you find after the data are in that the experimental group had no significant decrease in blood pressure but did have a significant increase in HDL (the "good" cholesterol), you would be remiss not to mention this. You would be guilty of deception, however, if you wrote your paper as if your original design was to study the effects of the drug on cholesterol and made no mention of blood pressure.
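A back-of-the-envelope sketch shows why mining a finished study so readily turns up "significant" numbers. The figures below assume the 23 illnesses later reviewed in the medical charts behave like independent tests, which is a simplification, since the endpoints are surely correlated:

```python
# Treat the 23 chart-review illnesses as if they were independent tests,
# each at the conventional 0.05 significance level -- a simplifying
# assumption for illustration.
alpha = 0.05
k = 23

expected_false_positives = k * alpha   # spurious "hits" expected by chance
p_at_least_one = 1 - (1 - alpha) ** k  # chance of at least one such hit

print(f"Expected spurious 'significant' results: {expected_false_positives:.2f}")  # ~1.15
print(f"P(at least one spurious result): {p_at_least_one:.0%}")                    # ~69%
```

On those assumptions, roughly one bogus "significant" endpoint is expected by chance alone, and the odds of finding at least one are better than two in three.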

So, it would have been entirely appropriate for the Sicher report to have noted in the Discussion section that they had discovered something interesting in their statistics: hospital stays and doctor visits were lower for the HP group. It was inappropriate to write the report as if that were one of the effects the study was designed to measure, when this effect was neither looked for nor discovered until Moore, the statistician for the study, began crunching numbers in search of something statistically significant after the study was completed. That was all he could come up with. Again, crunching numbers and data mining after a study is completed is appropriate; not mentioning that you rewrote your paper to make it look as if it had been designed to crunch those numbers isn't.

It would have been appropriate in the Discussion section of their report to have speculated as to the reason for the statistically significant differences in hospitalizations and days of hospitalization. They could have speculated that prayer made the difference and, had they been thorough, they would also have noted that insurance coverage could account for it just as well. "Patients with health insurance tend to stay in hospitals longer than uninsured ones" (Bronson 2002). The researchers should have checked this out and reported their findings. Instead, they took a list of 23 illnesses associated with AIDS and had Sicher go back over each of the forty patient medical charts to collect the data for the 23 illnesses as best he could. This was after Sicher knew which group, prayer or control, each patient had been randomly assigned to. The fact that the names were blacked out, so that he could not immediately tell whose record he was reading, does not seem sufficient to justify allowing him to review the data. There were only 40 patients in the study and he was familiar with each of them. It would have been better had an independent party, someone not involved in the study, gone over the medical charts. Sicher is "an ardent believer in distant healing" and he had put up $7,500 for the pilot study on prayer and mortality (ibid.). His impartiality was clearly compromised. So was the double-blind quality of the study.

Thus, there was quite a bit of significant and relevant evidence suppressed in the Sicher study that, had it been revealed, might have diminished its reputation as the best designed study ever on prayer and healing. Instead of being held up as a model of promising research in the field of spiritual science, this study might have ended up in the trash heap where it belongs.

another example

In an effort to get reporters to be more critical of President Barack Obama's economic stimulus package, Don Stewart, a spokesman for Senate Republican leader Mitch McConnell of Kentucky, urged reporters to gauge the wastefulness of the package by getting "out your calculators" and dividing the amount of money being spent by the number of jobs created or saved. Doing so produces the ridiculous figure of nearly a quarter of a million dollars per job. (The White House had estimated that $160 billion in stimulus money had been spent and that 650,000 jobs had been created or preserved.) Fortunately, many reporters didn't take the bait.
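The division itself is trivial; the fallacy lies in treating the quotient as the whole story. A one-line sketch in Python, using the White House figures:

```python
# Stewart's naive "cost per job" arithmetic, using the White House figures:
stimulus_spent = 160e9  # dollars of stimulus money spent
jobs = 650_000          # jobs created or preserved

print(f"${stimulus_spent / jobs:,.0f} per job")  # ~$246,154
```

(The Washington Examiner's version quoted below starts from $150 billion, which yields roughly $230,000 per job.) As the responses below point out, the quotient ignores everything else the money buys.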

Calvin Woodward of the Associated Press, for example, responded by writing an article on some of the things Stewart was not considering. Once all the evidence is taken into account, the notion that Obama is spending about $250,000 per job can be seen for the distortion it is.

Woodward notes:

The calculations ignore the fact that the money doesn't go directly to each job holder; much of it goes toward materials and supplies.

The contracts being made will fuel work for months or years. Jobs begun with stimulus money will probably stimulate more jobs in the future, e.g., a construction project may only require a few engineers to get going, but the work force may swell "as ground is broken and building accelerates."

The stimulus package approved by Congress includes money for "research, training, plant equipment, extended unemployment benefits, credit assistance for businesses and more."

The Washington Examiner, however, didn't do any critical thinking and wrote:

Even if we take at face value the White House claim that it created or saved all these jobs with approximately $150 billion of the economic stimulus money, a little simple math shows the taxpayers aren’t getting any bargains here: $150 billion divided by 650,000 jobs equals $230,000 per job saved or created. Instead of taking all that time required to write the 1,588-page stimulus bill, Congress could have passed a one-pager saying the first 650,000 jobless persons to report for work at the White House will receive a voucher worth $230,000 redeemable at the university, community college or trade school of their choice. That would have been enough for a degree plus a hefty down payment on a mortgage.

MediaMatters for America took the Examiner to task for its "misleading cost-per-job stimulus math." The simplistic math doesn't capture the complexity of the effects of the stimulus package.

further reading

prayer entry in The Skeptic's Dictionary

A Prayer Before Dying by Po Bronson (Wired Dec. 2002)

Abstract of the Sicher report in the Western Journal of Medicine

lesson 8: replication revisited


Last updated 27-Oct-2015

© Copyright 1994-2016 Robert T. Carroll