Overconfident? Underconfident? Here's how we tend to miss the mark in predicting performance
Why do most people believe they drive better than average? Are we wrong about our own abilities or about others'? New research uncovers the sources of our erroneous performance predictions relative to our peers.
- Previous research finds that, for many tasks and activities, the majority of people tend to predict that they outperform others, especially when tasks are easy. On difficult tasks, however, most people tend to predict that they underperform others.
- A new study disentangles these prediction biases and finds that overconfidence versus others (i.e., wrongly predicting an above-average performance) tends to be primarily driven by overestimating one's own absolute performance, while underconfidence versus others (i.e., wrongly predicting a below-average performance) tends to be primarily driven by overestimating others' absolute performance.
- Understanding the causes of prediction biases can help remedy them.
A famous 1981 study still elicits chuckles: In the study's sample, 93% of U.S. drivers believed they were more skillful than the average driver.
Of course, many of those drivers were wrong. So, what's going on here? First of all, in aggregate, U.S. drivers clearly tend to overestimate themselves. Yet it's also worth noting that many of the 93% of sampled drivers were absolutely correct about their superior skills. Also, those who were wrong about outperforming their peers could err in a couple of ways: by misestimating their own ability behind the wheel and/or by misjudging the abilities of others.
New research by Isabelle Engeler and Gerald Häubl disentangles what is going on with performance prediction errors, both when forecasters display overconfidence and when they sell themselves short versus others. They do this by conducting studies in which runners were surveyed ahead of an organized, timed race. "The beauty of working with data in running," Engeler explains, "is that there is an objective performance measure to compare predictions against — a runner's official finish time stamp." Not only could the researchers compare each runner's predicted and actual finishing times, they could also determine whether a runner was indeed better (or worse) than average by comparing individual predictions with the average finish time of all runners in that race, rather than relying on aggregate results alone.
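The comparison logic behind this design can be sketched in a few lines. The following is a minimal, hypothetical illustration (function names and the numbers are invented for this sketch, not taken from the study) of how a runner's better-than-average belief can be checked against both their own result and the field's average:

```python
# Hypothetical sketch of the comparison logic. All figures are invented
# examples, not data from the study. Lower finish times are better.

def classify_belief(predicted_own, actual_own, actual_field_average):
    """Label a runner's better-/worse-than-average belief as correct or misplaced."""
    predicted_better = predicted_own < actual_field_average
    actually_better = actual_own < actual_field_average
    if predicted_better and not actually_better:
        return "misplaced overconfidence"
    if not predicted_better and actually_better:
        return "misplaced underconfidence"
    return "correct belief"

# Example: a runner predicts 260 minutes but actually runs 300,
# while the field averages 280 minutes.
print(classify_belief(260, 300, 280))  # misplaced overconfidence
```

The point of the objective time stamp is visible here: both the runner's prediction and the field average are measured in the same units, so "better than average" is a matter of arithmetic, not self-report.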
Ready, set, show!
Engeler and Häubl chose a challenging mountain race to study, with uphill course distances ranging from 10 to 78 kilometers. The activity is difficult for some, though not all, runners, which allowed them to study prediction errors in both directions: misplaced overconfidence and misplaced underconfidence versus others.
Controlling for age, gender and running experience, here's what they found:
- Runners' overconfidence (i.e., erroneously predicting that their finishing times would be better than average when they were not) was primarily driven by misestimating their own performances. That is, these runners finished with slower times than they had predicted for themselves.
- Runners' underconfidence (i.e., erroneously predicting that their finishing times would be worse than average when they were not) was primarily driven by misestimating others' performances. That is, these runners were surprisingly accurate about their own race times, but they overestimated how fast their competitors would run.
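The two findings above amount to splitting a comparative prediction error into two components: an error about oneself and an error about the field. A hypothetical sketch of that decomposition (the function and all figures are invented for illustration, not from the study's data):

```python
def decompose_error(predicted_own, actual_own, predicted_avg, actual_avg):
    """Split a comparative prediction error into its two possible sources.

    A belief about beating the average can go wrong because of
    (a) misestimating one's own time and/or (b) misestimating the
    field's average time. Lower times are better, so a negative error
    means the forecaster expected a faster time than materialized.
    """
    own_error = predicted_own - actual_own      # negative: too optimistic about self
    others_error = predicted_avg - actual_avg   # negative: overrated the field
    return own_error, others_error

# Hypothetical underconfident runner: nearly accurate about herself
# (predicted 250 min, ran 252) but expecting the field to be much
# faster than it was (predicted 240 min average, actual 280).
own, others = decompose_error(250, 252, 240, 280)
print(own, others)  # -2 -40
```

In this invented example, almost all of the error sits in the second component, mirroring the pattern the study reports for underconfident runners: accurate about themselves, too generous about everyone else.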
"Overconfidence is well studied. We have evidence about overconfident entrepreneurs, investors, or drivers. But what struck me as particularly interesting here is our findings on underconfidence — runners who erroneously predicted that they would perform worse than average actually tended to be fairly accurate about their own performance but overhyped the performances of others," Engeler says. And there are practical implications. In a business setting, teams facing difficult tasks might deal with underconfidence (or impostor syndrome) by underestimating just how hard the work is for others as well. Recognizing prediction biases could help counter self-doubt which might arise when comparing with others.
It's noteworthy that this research helps make sense of seemingly contradictory previous results in the field. Engeler and Häubl reexamine published data from a couple of previous studies, such as one in which students answered trivia questions, to tease out incorrect better-than-average (and worse-than-average) beliefs from correct ones and to move beyond aggregate results. These reexaminations provide additional support for their findings.
And while it's still true that we can't all be above average, we can come armed with a better understanding of how perceptions go astray.