This episode is brought to you by Stamps.com – where is the fun in living in the future if you still have to go to the post office? Click on the microphone and enter “smart” for a $110 special offer.
This episode is also brought to you by Lynda.com, an easy and affordable way to help individuals and organizations learn. Try Lynda.com free for 7 days.
This episode is also brought to you by Harry’s. Get $5 off the perfect holiday gift. Just go to Harrys.com and type in my coupon code SOSMART with your first purchase of quality shaving products.
It’s difficult to be certain of much in life.
Not only are you mostly uncertain of what will happen tomorrow, or next year, or in five years, but you often can’t be certain of the correct course of action, the best place for dinner, what kind of person you should be, or whether or not you should quit your job or move to a new city. At best, you are only truly certain of a handful of things at any given time, and aside from mathematical proofs – two apples plus two apples equals four apples (and even that, in some circles, can be debated) – you’ve become accustomed to living a life in a fog of maybes.
Most of what we now know about the world replaced something we thought we knew, something that turned out to be wrong. This is especially true in science, our best tool for getting at the truth. It’s a constantly churning sea of uncertainty. Maybe this, maybe that – but definitely not this, unless… Nothing raises a scientist’s brow more than a pocket of certainty, because it’s usually a sign that someone is very wrong.
Being certain is a metacognition, a thought concerning another thought, and the way we often bungle that process is not exclusively human. When an octopus reaches out for a scallop, she does so because somewhere in the chaos of her nervous system a level of certainty crossed some sort of threshold, a threshold that the rock next to the scallop did not. Thanks to that certainty threshold, most of the time she bites into food instead of gravel. We too take the world into our brains through our senses, and there we are mostly successful at determining the difference between things that are food and things that are not, but not always. There’s even a Japanese game show where people compete to determine whether household objects are real or are facsimiles made of chocolate. Seriously, check out the YouTube video of a man gleefully biting off a hunk of edible door handle. Right up until he smiles, he’s just rolling the dice, uncertain.
Thanks to the sciences of the mind and brain we now know of several frames in which we might make judgments about the world. Of course, we already knew about this sort of thing in the days of our prescientific stupor. You don’t need a caliper and some Bayesian analysis to know that the same person might choose a different path when angry than she would when calm or that a person in love is likely to make decisions she may regret once released from that spell. You have a decently accurate intuition about those states of mind thanks to your exposure to many examples over the years, but behavioral sciences have dug much deeper. There are frames of mind your brain works to mask from the conscious portions of the self. One such frame of mind is uncertainty.
In psychology, uncertainty was made famous by the work of Daniel Kahneman and Amos Tversky. In their 1982 collection of research, “Judgment under Uncertainty: Heuristics and Biases,” the psychologists explained that when you don’t have enough information to make a clear judgment, or when you are making a decision concerning something too complex to fully grasp, instead of backing off and admitting your ignorance, you tend to push forward with confidence. The stasis of uncertainty never slows you down because human brains come equipped with anti-uncertainty mechanisms called heuristics.
In their original research they described how, while driving in a literal fog, it becomes difficult to judge the distance between your car and the other cars on the road. Landmarks, especially those deep in the mists, become more hazardous because they seem farther away than they actually are. This, they wrote, is because for your whole life you’ve noticed that things that are very far away appear a bit blurrier than things that are near. A lifetime of dealing with distance has reinforced a simple rule in your head: the closer an object, the greater its clarity. This blurriness heuristic is almost always true, except underwater or on a foggy morning, or on an especially clear day, when the rule errs in the other direction and objects that are far away seem much closer than they are.
Kahneman and Tversky originally identified three heuristics: representativeness, availability, and anchoring. Each one seems to help you estimate the likelihood that something is true, or the odds that one choice is better than another, without actually doing the work required to truly solve those problems. Here is an example of representativeness from their research. Imagine I tell you that a group of 30 engineers and 70 lawyers have applied for a job. I show you a single application that reveals a person who is great at math and bad with people, a person who loves Star Wars and hates public speaking, and then I ask whether it is more likely that this person is an engineer or a lawyer. What is your initial, gut reaction? What seems like the right answer? Statistically speaking, it is more likely the applicant is a lawyer. But if you are like most people in their research, you ignored the odds when checking your gut. You tossed the numbers out the window. So what if there is a 70 percent chance this person is a lawyer? That doesn’t feel like the right answer.
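If you want to see what honoring the base rate actually looks like, here is a minimal sketch in Python. It is not from Kahneman and Tversky’s materials, and the 4-to-1 likelihood ratio (how much more often the “great at math, bad with people” profile fits engineers than lawyers) is an assumption invented purely for illustration.

```python
# Base rates from the example: 30 engineers, 70 lawyers.
p_engineer = 30 / 100
p_lawyer = 70 / 100

# Assumption for illustration (not a figure from the study): suppose the
# "great at math, bad with people" profile is four times as likely to
# describe an engineer as a lawyer.
likelihood_ratio = 4.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = p_engineer / p_lawyer              # 3/7, about 0.43
posterior_odds = prior_odds * likelihood_ratio  # about 1.71
p_engineer_given_profile = posterior_odds / (1 + posterior_odds)

print(f"P(engineer | profile) = {p_engineer_given_profile:.2f}")  # about 0.63
```

Even with evidence that favors “engineer” four to one, the 70/30 base rate keeps the answer well short of certainty. The gut response skips this arithmetic entirely and treats the base rate as if it were 50/50.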
That’s what a heuristic is: a simple rule that, in the currency of mental processes, trades accuracy for speed. A heuristic can lead to a bias, and your biases, though often correct and harmless, can be dangerous when in error, resulting in a wide variety of bad outcomes, from foggy-morning car crashes to unconscious prejudice in job interviews.
For me, the most fascinating aspect of all of this is how it renders invisible the uncertainty that leads to the application of the heuristic. You don’t say to yourself, “Hmm, I’m not quite sure whether I am right or wrong, so let’s go with lawyer,” or, “Hmm, I don’t know how far away that car is, so let’s wait a second to hit the brake.” You just react, decide, judge, choose, and move on, sometimes right, sometimes wrong, unaware – unconsciously crossing your fingers and hoping for the best.
These processes lead to a wonderful panoply of psychological phenomena. In this episode of the podcast we explore the halo effect, one of the ways this masking of uncertainty can really get you in trouble. When faced with a set of complex information, you tend to turn the volume down on the things that are difficult to quantify and evaluate and instead focus on the few things (sometimes the one thing) that are most tangible and concrete. You then use the way you feel about what is more salient to determine how you feel about the things that are less salient, even if the other traits are unrelated.
Here’s an example. In a study headed by psychologist Barry Staw in 1974, 60 business school students gathered into three-person groups. Each group received five years of a mid-sized company’s financial reports, full of hard data, along with a letter from the company’s president describing its prospects. The reports ran through 1969, and the task for each group was to estimate the company’s sales and earnings per share for 1970. Since the actual 1970 figures were already on hand, it seemed like a good exercise to see how much the students had learned in business school. The scientists told the students that they had already run this experiment once on groups of five people, and that they wanted to see how smaller groups would perform on the same task. Of course, most of this wasn’t true. No matter what the students turned in, the scientists tossed it all out. Instead, each group received a randomly assigned grade. Some were told they did extremely well, and others were told they did very, very poorly.
What Staw discovered was that when the students were told they performed in the top 20 percent of all subjects, the people in those groups attributed their success to things like great communication, overall cohesiveness, openness to change, competence, a lack of conflict, and so on. In groups told that they performed in the bottom 20 percent, the story was just the opposite. They said they performed poorly because of a lack of communication, differences in ability, close-mindedness, sparks of conflict, and a variety of other failings. They believed they had gained knowledge about the hazy characteristics of the group, but in reality they were simply using a measure of performance as a guide for creating attributions from thin air.
In his book, The Halo Effect, Phil Rosenzweig described the Staw study like this: “…it’s hard to know in objective terms exactly what constitutes good communication or optimal cohesion…so people tend to make attributions based on other data that they believe are reliable.” That’s how the halo effect works – things like communication skills are weird, nebulous, abstract, and nuanced concepts that don’t translate well into quantifiable, concrete, and measurable aspects of reality. When you make a judgment under uncertainty, your brain uses a heuristic and then covers up the evidence so that you never notice that you had no idea what you were doing. When the students were asked to rate a not-so-salient trait like communication, they looked for something more salient to go on. In this case it was the randomly assigned rating. That rating then became a halo whose light altered the way the students saw all the less salient aspects of their experience. The only problem was that the rating was a lie, and thus, so was each assessment.
Research into the halo effect suggests this sort of thing happens all the time. In one study, a professor with a thick Belgian accent pretended to be mean and strict for one group of American students, who then described his accent as grating and horrendous; when he pretended to be nice and laid-back for a similar group, they described the same accent as beautiful and pleasant. In another study, scientists wrote an essay and attached one of two photos to it, pretending that the photos were of the person who wrote the work. If the photo was of an attractive woman, people tended to rate the essay as well-written and deep. If the photo was of (according to the scientists) an unattractive woman, the essay received poorer scores, and people tended to rate it as less insightful. In studies where teachers were told that a student had a learning disability, they rated that student’s performance as weaker than did other teachers who were told nothing at all about the student before the assessment began. In each example, people didn’t realize they were using a small, chewable bite of reality to make assumptions about a smorgasbord they couldn’t fully digest.
As an anti-uncertainty mechanism, the halo effect doesn’t just render your lack of insight invisible; it encourages you to go a step further. It turns uncertainty into false certainty. And, sure, philosophically speaking, just about all certainty is false certainty, but research into the halo effect suggests that whether or not you accept this as a concept, as a truth, you rarely notice it in the moment, when it actually matters.
Fire up the latest episode of the You Are Not So Smart podcast to learn more about the halo effect, and as a bonus you’ll hear two-and-a-half hours of excerpts from my book, You Are Now Less Dumb, now available in paperback.
Episode shownotes and subscription information: YANSS 038 – The Power of Halos
Image: Shutterstock