A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions

From Abracadabra to Zombies

Book Review

The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us

by Christopher Chabris and Daniel Simons

Crown 2010

Seventy-two percent of people think that we use only 10 percent of our brain capacity, which proves that 72 percent of people are wasting the 10 percent of their brain that's working. :)

The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us might seem like the perfect antidote to Malcolm Gladwell's seductive homage to intuition, Blink. Whereas Gladwell extols the virtues of intuition, Chabris and Simons explore several ways that intuition can lead us astray. In The Tipping Point and Outliers, Gladwell takes advantage of the brain's natural tendency to see patterns where there are none. Chabris and Simons explain why many of those patterns are illusions. Gladwell uses compelling but selective and artificially isolated stories that seduce the reader into making causal connections and drawing grand inferences. He doesn't confuse the reader with compelling but contrary anecdotes. He convinces the reader of the value and accuracy of intuition and hasty generalization by playing on the reader's natural proneness to what the social scientists blandly refer to as cognitive biases. He knows the reader will connect the dots and draw causal inferences or grand generalizations without blinking, and then blithely go on her merry way thinking she's gained knowledge when all she's really gained is more information wrongly evaluated.

Gladwell uses other techniques in Blink and his other books, to be sure, but presenting a compelling narrative—and Gladwell's stories are compelling, even if selective and often deceptive—takes advantage of the intelligent reader's inclination to fill in the blanks. Gladwell works the reader by taking advantage of how the brain functions. Chabris and Simons, on the other hand, explain how the brain works and how our intuitions can deceive us. But, just as Gladwell occasionally reminds the reader that intuition isn't perfect and generalizations aren't justified from single examples, Chabris and Simons remind the reader that sometimes intuition is a better guide than slow, purposeful, reflective cognition. Confusing the matter even more is the fact that there really isn't a whole lot any of us can do to overcome the illusions Chabris and Simons describe. Both Gladwell and Chabris/Simons know how to tell a story and pack a book with colorful narratives. Trying to draw useful lessons—other than the most obvious ones—from those narratives is a difficult task, though, in either case. The obvious ones most readers of the book will already know: be skeptical of both your own and others' perceptions, memories, and judgments; hold all knowledge tentatively; and have some intellectual humility rather than arrogance. Some lessons are harder than others to accept: personal experience does not trump the results of properly done scientific experiments.

The assumptions most of us make about perception, memory, causality, and testimony, joined with our natural empathy and desire to have a better life, make it easy to seduce us into beliefs and actions that are false, costly, and harmful. Chabris and Simons note that our natural inclinations generally serve us well, but not always. Recognizing some common illusions or biases is a first step in protecting ourselves from the snares of deception.

__________

One of the first things I learned when I began teaching courses in critical thinking in 1974 was that those of us trained in philosophy had a lot to learn from those doing work in the social sciences on cognitive, perceptual, and affective biases or illusions. We had a lot to learn, that is, if we hoped to teach our students to think critically. Identifying fallacies, learning to test deductive arguments for validity, and the like would not be enough. Mastering those tasks would not be enough, that is, unless our goal was to send out into the world a generation of smart-sounding but empty-headed poseurs. Students should be taught how the mind works and why we humans—even the smartest and most educated among us—continue to make the same kinds of reasoning errors generation after generation despite all our knowledge and training. Without awareness of common pitfalls such as confirmation bias, positive-outcome bias, and subjective validation, a person trained in logic and fallacy detection is easily deceived into thinking that he or she has acquired invincible armor against assaults of unreason. Expressions like post hoc ergo propter hoc and false cause should be informed by knowledge of evolution and how the brain works. That knowledge should be the basis for explaining why science uses methods like the randomized, double-blind experiment to search for causal connections and to rule out alternative causal mechanisms.

Thirty-five years ago, when I began using Howard Kahane's Logic and Contemporary Rhetoric as a textbook in my logic classes, there weren't many books written for the general public that focused on the various ways we are deceived and misled by assumptions we are prone to make about thinking, perception, and the testimony of others. In recent years, several excellent books have filled this gap. The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us is the latest in a short list of entertaining (with their poignant illustrative examples) and educational books that explain how the mind works, and inform us that it doesn't work the way intuition tells us it works. If you've read and appreciated any of the books listed at the end of this review, you will probably want to add The Invisible Gorilla to your bookshelf, Kindle, or iPad.

Before continuing, however, if you have not done the test of selective attention that Chabris and Simons are famous for, you should stop reading this review now and go immediately to the test on the authors' website, theinvisiblegorilla.com. (When you get there, click on the video. Instructions will appear on the screen.)

__________
 

After several chapters of mostly bad news about the difficulty of overcoming perceptual and cognitive illusions, the penultimate chapter of this book brings some good news: there are some exercises you can do to strengthen your brainpower and thereby reduce your chances of being deceived by the same brain you are trying to train. If you were thinking of Brain Gym, think again. It won't help, nor will knowing your brain type. Neither will listening to Mozart or taking Procera AVH. Nor will using subliminal tapes or hypnosis. Your self-help guru won't provide you with much more than the illusion of control. The scientific evidence shows that about the only thing that consistently leads to improved brain function is physical exercise. Of course, if you're leading an unhealthy life most of the time, even daily workouts at the gym won't help you much.

The bad news is that most of the folks claiming to have the key to unleashing your inner brain so that you can reach your mythical "true potential" are trying to sell you an illusion. Chabris and Simons call it the illusion of potential. Steve Salerno wrote a whole book about the sellers of this illusion. He called it Sham.

Chabris and Simons review several examples of how to "transform a claim with almost no scientific support into a popular legend that fuels multimillion-dollar businesses." They go into great detail explaining the origin and success of the notion of the Mozart effect. They're especially qualified to do so, as Chabris published a meta-analysis of studies that allegedly support the idea that listening to Mozart increases one's IQ. Chabris concluded that rather than proving listening to Mozart was beneficial, the studies could just as well be used to argue that sitting in silence or relaxing makes you dumber. In other words, the studies amounted to what is referred to in polite society as bullshit. (Chabris appears in Bullshit, season one, episode 5: Baby Bullshit, about the belief that Mozart is good for babies, despite the fact that there has never been a study on babies and the so-called Mozart effect.) The authors found similar pseudoscience behind Nintendo's overly hyped Brain Age software for gaming systems.
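For readers who wonder what a meta-analysis like Chabris's actually computes, here is a minimal fixed-effect sketch in Python. The effect sizes are invented for illustration and are not Chabris's data; the point is the shape of the result he reported: a small pooled effect whose confidence interval spans zero.

```python
# Minimal fixed-effect meta-analysis sketch. The (effect size d, standard
# error) pairs below are invented for illustration -- NOT Chabris's data.
effects = [(0.30, 0.20), (0.05, 0.10), (-0.10, 0.15), (0.12, 0.25)]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / se ** 2 for _, se in effects]
pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
se_pooled = (1 / sum(weights)) ** 0.5

print(f"pooled d = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * se_pooled:.2f}, {pooled + 1.96 * se_pooled:.2f}]")
# pooled d = 0.05, 95% CI = [-0.09, 0.20]: a tiny effect whose interval
# spans zero -- the statistical shape of the "Mozart effect."
```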

About the only uncontested effect of cognitive training is that training in a specific area improves performance in that area but does not transfer to other cognitive tasks.* Even learning to memorize long lists of numbers doesn't help one learn to memorize long lists of letters. "Practice improves specific skills, not general abilities." So, put down that sudoku, unless your goal is to get better at doing sudoku. If you're trying to exercise your brain, you'd do yourself more good by taking a brisk walk.

There is also some positive news for those who use groups to make decisions: you're better off if you have the members of the group think about the issue independently and bring their written thoughts to the table. Many groups use the "brainstorming" technique, which brings the members to the table to listen to and discuss each other's thoughts on some problem or issue. The members are told not to be critical of their own or others' ideas until all the ideas have been put on the table. This method has been shown not to work as well as having the members come to the table with their independently derived thoughts. Readers can use their intuition to figure out why this is the case, or they can read the book for details.

__________
 

The book begins with a discussion of the so-called "invisible gorilla test," which isn't really a test at all. It's not even accurate to refer to "the invisible gorilla," since about half of those who take it see the gorilla; the interesting thing is that the other half don't. Many, in fact, are convinced that two films were used and that they were tested with the film that had no gorilla. Most who don't see the gorilla the first time through are shocked that something so salient could cross their visual field without their noticing it. Chabris and Simons tell us that, despite what some people think, there is no difference in personality, intelligence, or educational level that distinguishes those who see the gorilla from those who don't. There's no gender or age difference either. (When I showed a film of the invisible gorilla to my classes, I tried the trick of telling them that females usually do better than males, just to give both groups an incentive to focus more intently on the task at hand.) The only group that might do especially well on this test is the one made up of serious basketball players. The reason should be obvious.

The gorilla test exemplifies what is called inattentional blindness. It is just one of several everyday illusions that the authors discuss. These illusions are not just intellectual exercises that don't affect us very often. They are called everyday illusions because they occur all the time and they can have significant effects that run the gamut from mistaken identity to false imprisonment to death on the highway, the runway, or the high seas. Some of these illusions lead to misguided legislation that aims at preventing harm but in fact doesn't prevent harm and may actually lead to more harm. Several legislatures have passed hand-held phone laws for drivers, banning driving while holding a cell phone but allowing driving while talking on a hands-free phone. The legislation is based on the illusion that holding the phone makes it harder to steer a vehicle and thus more dangerous than driving while using a hands-free device. Seventy-seven percent of Americans think it's safer to talk on a hands-free phone than on a handheld phone. The empirical evidence shows otherwise. The evidence shows that the deficit in driving skill has nothing to do with holding or not holding the phone but with the distraction that comes from talking on the phone while driving. The problem is with attention, not the hands. The dangers of hands-free phone use while driving might be amplified by another illusion, the illusion of confidence, which deludes a driver into thinking that she can drive safely while talking on the phone as long as her hands are free. Despite your belief in your abilities to multitask, "the more attention-demanding things your brain does, the worse it does each one."

Unfortunately, the evidence shows that training people to be more attentive by trying to develop an ability to notice the unexpected doesn't help. We've evolved to notice what we need to notice to survive and multiply. When you read about the young girl killed in a crosswalk while talking on her cell phone or listening to her iPod, don't wonder what happened to her instincts. Her instincts were probably just fine, but they didn't evolve for life with those kinds of distractions. Furthermore, Chabris and Simons remind us: "Our neurological circuits for vision and attention are built for pedestrian speeds, not for driving [or flying!] speeds."

The fact that inattentional blindness is unavoidable doesn't mean there aren't important lessons to be learned from studying it. We might be more understanding of people who claim they didn't see something even though it was right before their eyes. They may be telling the truth. On the other hand, we might be less taken aback when we notice something that is right before our eyes that we didn't notice a few seconds or minutes or days ago. What one takes to be a miracle might just be a matter of inattentiveness while perceiving. And we should be realistic about what we should expect from those manning baggage scanners at airports and from our radiologists or dentists reading x-rays. I don't think we should speculate, however, that inattentional blindness explains why others don't see things the way we see them, as Dean Radin does in his lame attempt to explain why skeptics reject psi (Entangled Minds, p. 44).

Perceptual illusions like inattentional blindness (or inattentional deafness) and change blindness reveal some important facts about perception. Our brains have evolved to produce useful representations without requiring faithful duplication of the visual (or auditory, etc.) field available to us at any given moment. The brain isn't storing hundreds or thousands of little details at each moment and constantly comparing those details to see if anything's been missed or has changed. Perception is determined in part by expectation, which makes perceiving the unexpected difficult unless it stands out vividly against the general picture perception provides.

Another illusion Chabris and Simons cover is the memory illusion. Readers of The Skeptic's Dictionary will be familiar with such things as false memory and cryptomnesia. Just as vision does not function like a video camera, memory does not function by recalling faithful replications of what we have experienced. Both perception and memory are constructive activities, and both are prone to error in the act of constructing a vivid perception or memory. Most of us are deceived into thinking that the more vivid and detailed our memories are, the more accurate they are. The scientific evidence does not support our intuition here. There are many well-documented examples of people having vivid but false memories of having been kidnapped, lost in the mall, or meeting some famous person. Studies on memories of salient events like the assassination of JFK or 9/11 show that many of these memories are inaccurate, despite their vividness and the confidence we have in them. This illusion of confidence not only leads us to put more faith in our own memories than we should, it also leads us to put more faith in the testimony of others when they exude certainty and appear self-assured. One of the more difficult points to get my critical thinking and philosophy of law students to accept was that eyewitness testimony has the strongest influence on jurors but is known to be unreliable. Eyewitness testimony is certainly less reliable than physical or circumstantial evidence, even if our intuition tells us otherwise. (One statistic illustrates the point: mistaken eyewitness identifications, confidently presented to juries, account for over 75% of wrongful convictions later overturned by DNA evidence.)

__________

In my view, the greatest hindrance to critical thinking is ignorance. You can't think very well about any subject if you don't have the necessary knowledge. On the other hand, many people with a great deal of knowledge prove to be ignorant of what they don't know, and the consequences of that ignorance can be disastrous. It should go without saying that information is not the same as knowledge, otherwise we'd all be extremely knowledgeable. Most of us are overwhelmed with information, but we shouldn't assume that being familiar with dozens of factoids is equivalent to understanding anything.

Having knowledge and thinking one has knowledge are often worlds apart. Chabris and Simons provide several amusing examples of scientists and other experts "overestimating their knowledge in their own fields of expertise." Whenever people think they know more than they do, they're under the influence of the illusion of knowledge. I won't bore the reader with the obvious examples of arrogant ignoramuses who have led us and kept us in wars where there isn't even a sensible meaning to the word 'win' or 'victory.' Most readers are familiar with the cold fusion claims of Pons and Fleischmann, Jacques Benveniste's water memory claims, and Blondlot's "discovery" of N-rays. On the other hand, there is an obvious benefit to the illusion of knowledge. True, it gives us more confidence than we should have and creates an unrealistic view of our world, but it provides us with an optimistic viewpoint that allows us to get out of bed in the morning and go through the day oblivious to what wicked morons we are. Depressed people, many of you will be happy to know, don't usually suffer from the illusion of knowledge. Still, it is depressing to realize that many people running wars or political campaigns, or betting on horses, commodities, or stocks, are prone to mistake their good luck for skill and knowledge. This has the added negative effect of giving them the illusion of confidence. It is a hard lesson to accept, but the success of many people is due to luck, not knowledge. If a thousand people try a thousand different methods and one of them hits the jackpot, it is an illusion to think the winner had more knowledge than the losers. If two psychics pick opposite winners in an athletic contest, one of them may appear to have more knowledge than the other, but the appearance is an illusion.
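The last point lends itself to a quick demonstration. Below is a minimal Monte Carlo sketch (my illustration, not the authors'): a thousand "experts" each make ten predictions by flipping a coin, and a handful will compile records that look like genuine foresight.

```python
import random

# Sketch: 1,000 "experts" each make 10 yes/no predictions purely by
# chance. How many end up with records impressive enough to look skilled?
random.seed(42)

def lucky_experts(num_experts: int = 1000, num_picks: int = 10) -> int:
    """Count chance predictors who get at least 9 of 10 picks right."""
    impressive = 0
    for _ in range(num_experts):
        correct = sum(random.random() < 0.5 for _ in range(num_picks))
        if correct >= 9:
            impressive += 1
    return impressive

print(lucky_experts())
# The expected count is 1000 * (11/1024), about 10.7 -- so roughly ten
# coin-flippers will look like seers, and only the winners write books.
```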

Playing on the illusion of knowledge, hucksters use "technobabble" to try to sell us expensive audio equipment or HDMI cables, for example, when the evidence shows that there's no meaningful difference between the expensive and the inexpensive stuff. Chabris and Simons are particularly offended by the use of "colorful images of blobs of activity on brain scans that can seduce us into thinking we have learned more about the brain (and the mind) than we really have." They call such appeals "brain porn."

Some might be surprised that the authors have many good things to say about those weather forecasters who appear on the nightly news to give us their forecasts. We may make fun of weather persons telling us that there is a 40% chance of rain next Tuesday, but that prediction is based on a much better knowledge base than, say, any claim made by a hedge fund manager or any claim by a general that we'll have accomplished our missions in Afghanistan and Iraq by...(pick any date you want; I guarantee it will be wrong).

__________

Readers of The Skeptic's Dictionary know that causal reasoning causes many people all kinds of problems. For example, no matter how many excellent scientific studies show that acupuncture has no intrinsic clinical value and is a form of placebo medicine, there will always be someone who thinks those studies must be wrong because "acupuncture is the only thing that helps my migraines" or some such thing. The illusion of causality gets full coverage in The Invisible Gorilla. There's the background involving our evolutionary history that has produced a species with stupendous pattern-recognition abilities, so stupendous, in fact, that we often see patterns where there are none. We've also evolved to find meaning in patterns and infer causal relationships from coincidences. "Our understanding of our world is systematically biased to perceive meaning rather than randomness and to infer cause rather than coincidence. And we are usually completely unaware of these biases." Consider how automatic some of these brain processes are:

In fact, visual areas of your brain can be activated by images that only vaguely resemble what they're tuned for. In just one-fifth of a second, your brain can distinguish a face from other objects like chairs or cars. In just an instant more, your brain can distinguish objects that look a bit like faces, such as a parking meter or a three-prong outlet, from other objects like chairs. Seeing objects that resemble faces induces activity in a brain area called the fusiform gyrus that is highly sensitive to real faces. In other words, almost immediately after you see an object that looks anything like a face, your brain treats it like a face and processes it differently than other objects.

Add a little religious or political zealotry to the brain's natural disposition to recognize faces in just about anything with a shape and you've got the recipe for a dozen tortillas with Our Lady of Guadalupe imprinted on them or a single toasted cheese sandwich that reminds people of President Obama.

On a serious note, just as we mistakenly trust a person who exudes confidence, we mistakenly think a person has expertise because he or she has lots of information. "Yet the mark of true expertise is not the ability to consider more options, but the ability to filter out irrelevant ones." This is true whether we are evaluating the expertise of others or are trying to do an honest self-assessment. As noted above, one of the hardest lessons to learn is that personal experience is not always the best guide as to what's true or what's even relevant to what's true. For example, many people are convinced their aches and pains are affected by the weather. Telling them that scientific studies haven't found any connection between changes in the weather and changes in people's aches and pains won't convince anyone that the two are not related. They've experienced it, and that's that. No amount of discourse on the post hoc fallacy, the regressive fallacy, confirmation bias, or the placebo effect will change their minds. It's almost as if there's a conspiracy in nature to lead people into error.

Speaking of conspiracy theories....the authors consider conspiracy theories to be "cognitive versions of the Virgin Mary Grilled Cheese." They result from pattern recognition gone awry. "Most conspiracy theories are based on detecting patterns in events that, when viewed with the theory in mind, seem to help us understand why they happened. In essence, conspiracy theories infer cause from coincidence."
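One way to feel the pull of this illusion is to hunt for patterns in data you know to be pure noise. Here is a short sketch (my illustration, not the authors') that generates random coin flips and measures the longest run:

```python
import random

# Sketch: pure randomness routinely produces runs that look "meaningful."
random.seed(7)

def longest_run(flips: str) -> int:
    """Length of the longest run of identical outcomes."""
    best = current = 1
    for prev, cur in zip(flips, flips[1:]):
        current = current + 1 if cur == prev else 1
        best = max(best, current)
    return best

flips = "".join(random.choice("HT") for _ in range(200))
print(flips[:40], "...")
print("longest run:", longest_run(flips))
# With 200 fair flips the longest run is typically 7 or 8. People asked
# to fake a random sequence rarely write a run longer than 4, so genuine
# randomness strikes our intuition as suspiciously patterned.
```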

The bottom line, according to the authors, is that the only way to properly determine causal relations is to do an experiment. Getting this across and accepted by most people is like teaching a camel to smile (very unnatural). Even epidemiological studies should be viewed with caution. They can indicate strong associations that should be tested for causality, but they don't establish causality in themselves. There have been too many studies claiming, or implying by their narrative, that a causal connection exists only because they found x followed y. It may be true that one successful company (Malcolm Gladwell in The Tipping Point) or many successful companies (Jim Collins in Good to Great) followed a similar pattern. But without comparing other companies that either had x but didn't produce y, or produced y but didn't have x, we have no idea whether we're dealing with a causal event or a coincidence.

I would agree that the best way to test for causality is to do a randomized double-blind controlled experiment and that too many authors make unjustified grandiose claims based on finding similar qualities among similar groups of things without having a comparison group. But, as the authors point out, sometimes it is not possible or ethical to do a proper experiment. Sometimes observational studies, whether prospective or retrospective, are the best we can expect. Combined with epidemiological data, we can often justifiably conclude that there is a high degree of probability that, for example, smoking causes lung cancer. Epidemiological studies are especially useful for ruling out causal connections. If one large observational study after another can find no correlation between, say, childhood vaccinations and autism, it is highly probable that there is no causal connection between the two. On the other hand, epidemiological studies can mislead people into jumping to the conclusion that a strong correlation indicates a causal connection. (See the clustering illusion, which is not discussed by Chabris and Simons but should have been. See also the Texas sharpshooter fallacy.)
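A toy simulation makes the point concrete. In the sketch below (my own illustration, with made-up variables), a hidden confounder drives two quantities that have no causal link to each other, and a naive observational analysis sees a strong association:

```python
import random

random.seed(1)

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 500
heat = [random.gauss(0, 1) for _ in range(n)]         # hidden confounder
ice_cream = [h + random.gauss(0, 0.5) for h in heat]  # driven by heat
drownings = [h + random.gauss(0, 0.5) for h in heat]  # also driven by heat

print(f"r = {pearson(ice_cream, drownings):.2f}")     # roughly 0.8
# Neither variable causes the other, yet the correlation is strong.
# Only an experiment, or explicitly controlling for the confounder,
# can separate cause from coincidence.
```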

Why, then, do so many people think there are causal connections when neither epidemiological studies nor experiments have found even a correlation? The authors explain this type of prevalent causal illusion in terms of the brain's tendency to make connections in a narrative or story that are not explicitly stated by authors or evident in experience. We've already discussed how the brain constructs perceptions out of bits and pieces of sense data. The brain also constructs causal narratives out of bits and pieces of data. Unfortunately, many people look for a single cause or reason for complex events, which misleads them even further. Add a celebrity or two, a doctor in a white coat, and a push from the media, and any number of erroneous causal beliefs can become widespread. Chabris and Simons examine the anti-vaccination movement as an example of how this works. They also review why anecdotes (good stories) have more influence on people than scientific studies. (Oprah is the queen of promoting causal illusions by using stories, celebrities, doctors in surgical garb, and sympathy to play on our natural tendency to see causal connections where there are none.)

Even those of us who know better have found again and again how hard it is to make ourselves try to falsify beliefs. It is not natural to look for counter-stories where disease didn't follow a breast implant or where autism didn't follow a vaccination.

The only quibble I have with Chabris and Simons regarding their discussion of causal illusions is with their claim that other primates don't make many causal inferences. I've had many experiences that contradict that claim, but then I haven't seen the scientific studies that support it.

Chabris and Simons conclude their book with some affirmations. Awareness of everyday illusions should make us less cocksure of our own and others' knowledge. We may even gain insight into why we and others sometimes think and act as we do.

Robert T. Carroll
12 June 2010


__________

* Soon after posting this review, I read an article on Steven Novella's blog Neurologica about yet another study, published in Nature, on training to improve cognitive functioning. The study's authors write:

Here we report the results of a six-week online study in which 11,430 participants trained several times each week on cognitive tasks designed to improve reasoning, memory, planning, visuospatial skills, and attention. Although improvements were observed in every one of the cognitive tasks that were trained, no evidence was found for transfer effects to untrained tasks, even when those tasks were cognitively closely related.

reader comments

2 August 2010
Thanks for a thoughtful review of what sounds like a deep book.  It reminds me of a memory and attention puzzle I've contemplated for over 15 years, a well-defined personal experience I don't yet completely understand.  This letter presumes to ask your comments.

I was commuting nearly an hour each way by car, taking advantage of the time to listen to German language material on cassette (I was studying German and occasionally traveling to Germany).  I heard periodicals, books, and other written material.  My level of German proficiency was such that I had to really listen actively, but even so I found the experience far less distracting than I now find a cell conversation - perhaps because I was not required to respond; many driving incidents over the year I did this suggested that I was not distracted from driving in the way cell calls distract me.  It was like listening to music while driving, but more so.

Now the puzzle: after listening to German audio for an hour's drive, I wouldn't remember the drive itself - only the German.  But if I heard the same tape again later, as I often did, I would then remember many scenes from the first playing - things I hadn't remembered originally.  Time after time, a spoken sentence would call to mind THIS overpass or THAT tree where I had been when I heard the same sentence earlier.

I am fascinated by the fact that those visual images were stored in my memory after all, but weren't available until the proper audio trigger was applied.  I would like to know how, in your opinion, this relates to inattentional blindness.

Thanks very much

Jay Hosler

reply: Your not remembering your drives after arriving at your destination is a classic example of inattentional blindness. Your remembering specific scenes later is related to sensing without seeing. We might call this inattentional perception. Just as we don't see some things right before our eyes because we're paying attention to something else, we sometimes perceive things unconsciously and become aware of them only later, when a memory surfaces. It's also possible that our "memories" of unconscious perceptions are confabulations. Sensing without seeing might account for some cases of déjà vu.

further reading

Ariely, Dan. (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. HarperCollins.

Bausell, R. Barker. (2007). Snake Oil Science: The Truth about Complementary and Alternative Medicine. Oxford University Press.

Burton, Robert. (2008). On Being Certain: Believing You Are Right Even When You're Not. St. Martin's Press.

Dawes, Robyn M. (2003). Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally. Westview Press.

Gilovich, Thomas. (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. The Free Press.

Groopman, Jerome, M.D. (2007). How Doctors Think. Houghton Mifflin. My review of this book appears elsewhere on this site.

Kida, Thomas. (2006). Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking. Prometheus.

Levine, Robert. (2003). The Power of Persuasion: How We're Bought and Sold. John Wiley & Sons.

Sutherland, Stuart. (2007). Irrationality. 2nd rev. ed. Pinter & Martin.

Taleb, Nassim Nicholas. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.

Van Hecke, Madeleine L. (2007). Blind Spots: Why Smart People Do Dumb Things. Prometheus.

more book reviews by R. T. Carroll