October 16, 2012

In his 2005 book “Expert Political Judgment: How Good Is It? How Can We Know?”, Philip Tetlock painstakingly tracked the predictions of 284 so-called experts in politics and economics to determine how accurate they were. His study lasted 20 years and included more than 82,000 predictions from this distinguished group of professional seers and soothsayers. He then compared the predictions to what actually happened in the real world. As you have probably guessed, the results were not flattering to the experts.

The experts did no better at predicting future events than a control group of non-experts. Put another way, as a group, their predictive powers were no better than random chance. It turned out that what they were really doing wasn’t predicting; it was guessing.

Actually, most of these experts did worse than random chance. There was, however, a small group of experts who did better than chance, and this is the most interesting part of the study.

What Tetlock found is that the experts who got things wrong tended to believe in one big idea that explained everything. When they encountered facts that supported their big idea, their confidence increased. When they encountered facts that contradicted it, they downplayed those facts. The result is that over the course of the 20-year study, their confidence grew but their accuracy didn’t.

What about the small group who did better than chance? They weren’t as confident in their predictive ability, and they didn’t have one big idea that explained everything. Instead, they knew a little about a lot of different things, and they tended to be more open-minded and flexible about changing their opinions in the face of changing facts. They usually went with what was working instead of blindly following a predefined belief system. For this group, acquiring new knowledge actually improved their predictive results. Unfortunately, these experts were in the minority, and they didn’t receive the same level of attention and notoriety that their more boisterous counterparts did.

So why did these less-accurate experts gain more fame and visibility than their more-accurate peers?  One explanation could be that people don’t really want good predictions.  What they want is confirmation of what they already believe.

Is there a lesson in this study for investors? If there is, it’s that the most famous, most confident, most ideological experts are the ones most likely to be wrong. Investing based on expert predictions about where the market is headed, what will happen in the economy, which stocks will do better than others, or which mutual funds will outperform next year is not very likely to work out as expected. It would be far better to consider many opinions from many different sources when making plans about the future. Keeping an open mind and being adaptable are far more likely to produce winning results than trusting in a guru with a singular worldview.

http://www.newyorker.com/archive/2005/12/05/051205crbo_books1

About the author 

Erik Conley

Former head of equity trading, Northern Trust Bank, Chicago. Teacher, trainer, mentor, market historian, and perpetual student of all things related to the stock market and excellence in investing.


